Browsing by Department "DKU - Medical Physics Master of Science Program"
Item Open Access A Comparative Study of Radiomics and Deep-Learning Approaches for Predicting Surgery Outcomes in Early-Stage Non-Small Cell Lung Cancer (NSCLC) (2022) Zhang, Haozhao. Purpose: To compare radiomics and deep-learning (DL) methods for predicting NSCLC surgical treatment failure. Methods: A cohort of 83 patients who underwent lobectomy or wedge resection for early-stage NSCLC at our institution was studied. There were 7 local failures and 16 non-local failures (regional and/or distant). Gross tumor volumes (GTV) were contoured on pre-surgery CT datasets after resampling to 1 mm³ isotropic resolution. For the radiomics analysis, 92 radiomics features were extracted from the GTV and z-score normalization was performed. The multivariate association between the extracted features and clinical endpoints was investigated using a random forest model following a 70%-30% training-test split. For the DL analysis, both 2D and 3D model designs were implemented using two different deep neural networks as transfer-learning problems: in the 2D design, 8×8 cm² axial fields-of-view (FOVs) centered within the GTV were adopted for VGG-16 training; in the 3D design, 8×8×8 cm³ FOVs centered within the GTV were adopted for training the U-Net's encoder path. In both designs, data augmentation (rotation, translation, flip, noise) was included to overcome potential training-convergence problems due to the imbalanced dataset, and the same 70%-30% training-test split was used. The three models (radiomics, 2D-DL, 3D-DL) were tested on predicting local failure, non-local failure, and disease-free survival. Sensitivity/specificity/accuracy/ROC results were obtained from 20 trained versions of each model. Results: The radiomics models showed limited performance in all three outcome-prediction tasks. The 2D-DL design showed significant improvement over the radiomics results in predicting local failure (ROC AUC = 0.546±0.056).
The 3D-DL design achieved the best performance for all three outcomes (local failure ROC AUC = 0.768±0.051, non-local failure ROC AUC = 0.683±0.027, disease-free survival ROC AUC = 0.694±0.042), with statistically significant improvements over the radiomics and 2D-DL results. Conclusions: The 3D-DL design outperformed the 2D-DL design in predicting clinical outcomes after surgery for early-stage NSCLC. By contrast, the classic radiomics approach did not achieve satisfactory results.
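The z-score normalization step applied to the 92 radiomics features can be sketched in pure Python; this is an illustrative helper, not the author's code, and the toy feature matrix is invented:

```python
from statistics import mean, stdev

def zscore_normalize(feature_matrix):
    """Z-score normalize each feature column across patients:
    z = (x - mean) / std, so every feature has mean 0 and unit spread."""
    n_features = len(feature_matrix[0])
    normalized = [row[:] for row in feature_matrix]
    for j in range(n_features):
        col = [row[j] for row in feature_matrix]
        mu, sigma = mean(col), stdev(col)
        for row in normalized:
            row[j] = (row[j] - mu) / sigma
    return normalized

# Toy example: 4 patients x 2 radiomics features (invented values)
features = [[10.0, 0.1], [12.0, 0.3], [14.0, 0.2], [16.0, 0.4]]
normalized = zscore_normalize(features)
```

After this step every feature contributes on the same scale, which matters for the downstream random forest's feature-importance comparisons.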
Item Open Access A Convolutional Neural Network for SPECT Image Reconstruction (2022) Guan, Zixu. Purpose: Single photon emission computed tomography (SPECT) is a functional nuclear medicine imaging technique commonly used in the clinic. However, it suffers from low resolution and high noise because of its physical structure and photon scatter and attenuation. This research aims to develop a compact neural network that reconstructs SPECT images from projection data with better resolution and lower noise. Methods and Materials: We developed a MATLAB program to generate 2-D brain phantoms, producing 20,000 phantoms and their corresponding projection data. The projection data were processed with a Gaussian filter and Poisson noise to simulate the real clinical situation. Of these, 16,000 were used to train the neural network, 2,000 for validation, and the final 2,000 for testing. To further mimic clinical conditions, five groups of projection data with decreasing acquisition views were used to train the network. Inspired by SPECTnet, we used a two-step training strategy for the network design. The full-size phantom images (128×128 pixels) were first compressed into a vector (256×1) and then decompressed back to full-size images; this was achieved by an autoencoder (AE) consisting of an encoder and a decoder.
The compressed vectors generated by the encoder serve as targets for the second network, which maps projections to compressed images; those compressed vectors are then reconstructed to full-size images by the decoder. Results: A total of 10,000 test images, divided into 5 groups with 360-, 180-, 150-, 120-, and 90-degree acquisitions, respectively, were reconstructed by the developed neural network, and the results were compared with those generated by the conventional FBP method. Compared with the FBP algorithm, the neural network provides reconstructed images with higher resolution and lower noise, even under limited-angle acquisitions. In addition, the new neural network performed better than SPECTnet. Conclusions: The network successfully reconstructs projection data into activity images. In particular, for the groups whose view angles are less than 180 degrees, the images reconstructed by the neural network have the same excellent quality as those reconstructed from 360-degree projection data, and the network is even more efficient than SPECTnet. Keywords: SPECT; SPECT image reconstruction; deep learning; convolutional neural network.
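The simulated projection step (ray sums degraded by Poisson counting noise) can be sketched as follows. This is an illustrative parallel-beam toy in pure Python, not the thesis's MATLAB pipeline; only axis-aligned angles are handled and the phantom is invented:

```python
import random
from math import exp

def project(phantom, angle_deg):
    """Parallel-beam forward projection of a square 2-D phantom at
    0 or 90 degrees (column sums vs. row sums). A full simulator
    would rotate the phantom for arbitrary view angles."""
    n = len(phantom)
    if angle_deg % 180 == 0:       # rays travel down columns
        return [sum(phantom[i][j] for i in range(n)) for j in range(n)]
    if angle_deg % 180 == 90:      # rays travel across rows
        return [sum(row) for row in phantom]
    raise NotImplementedError("only axis-aligned angles in this sketch")

def add_poisson_noise(projection, seed=0):
    """Replace each ray sum with a Poisson draw of the same mean,
    mimicking counting statistics (Knuth's algorithm, fine for the
    small means used here)."""
    rng = random.Random(seed)
    noisy = []
    for lam in projection:
        threshold, k, p = exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                break
            k += 1
        noisy.append(k)
    return noisy

phantom = [[0, 1, 0], [1, 4, 1], [0, 1, 0]]
sino0 = project(phantom, 0)        # ray sums at the 0-degree view
noisy0 = add_poisson_noise(sino0)  # one noisy realization
```

Restricting the set of view angles (360 down to 90 degrees), as the five training groups do, simply shortens the list of angles this projector is called with.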
Item Open Access A Deep-Learning Method of Automatic VMAT Planning via MLC Dynamic Sequence Prediction (AVP-DSP) Using 3D Dose Prediction: A Feasibility Study of Prostate Radiotherapy Application (2020) Ni, Yimin. Introduction: VMAT treatment planning requires a time-consuming DVH-based inverse optimization process, which impedes its application in time-sensitive situations. This work aims to develop a deep-learning based algorithm, Automatic VMAT Planning via MLC Dynamic Sequence Prediction (AVP-DSP), for rapid prostate VMAT treatment planning.
Methods: AVP-DSP utilizes a series of 2D projections of a patient's dose prediction and contour structures to generate a single 360º dynamic MLC sequence in a VMAT plan. The backbone of AVP-DSP is a novel U-net implementation with a 4-resolution-step analysis path and a 4-resolution-step synthesis path. AVP-DSP was developed on 131 previous prostate patients who received simultaneous integrated boost (SIB) treatment (58.8 Gy/70 Gy to PTV58.8/PTV70 in 28 fractions). All patients were planned with a 360º single-arc VMAT technique using an in-house intelligent planning tool in a commercial treatment planning system (TPS). 120 plans were used for AVP-DSP training/validation, and 11 plans were used as independent tests. Key dosimetric metrics achieved by AVP-DSP were compared against those of the commercial TPS plans.
Results: After dose normalization (PTV70 V70Gy = 95%), all 11 AVP-DSP test plans met institutional clinical guidelines for dose distribution outside the PTV. Bladder (V70Gy = 6.8±3.6 cc, V40Gy = 19.4±9.2%) and rectum (V70Gy = 2.8±1.8 cc, V40Gy = 26.3±5.9%) results in AVP-DSP plans were comparable with the commercial TPS plan results (bladder V70Gy = 4.1±2.0 cc, V40Gy = 17.7±8.9%; rectum V70Gy = 1.4±0.7 cc, V40Gy = 24.0±5.0%). 3D max dose results in AVP-DSP plans (D1cc = 118.9±4.1%) were higher than in the commercial TPS plans (D1cc = 106.7±0.8%). On average, AVP-DSP took 30 seconds to generate a plan, in contrast to current clinical practice (>20 minutes).
Conclusion: Results suggest that AVP-DSP can generate a prostate VMAT plan with clinically acceptable dosimetric quality. With its high efficiency, AVP-DSP may hold great potential for real-time planning applications after further validation.
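The dosimetric endpoints quoted above (V70Gy in cc; D1cc as a near-maximum dose) can be computed from a voxelized structure dose roughly as below. The helper names and the toy dose values are invented for illustration:

```python
def v_at_dose(dose_voxels, threshold_gy, voxel_cc):
    """V_xGy in absolute volume: total volume (cc) of structure
    voxels receiving at least `threshold_gy`."""
    return sum(voxel_cc for d in dose_voxels if d >= threshold_gy)

def d_at_volume(dose_voxels, volume_cc, voxel_cc):
    """D_1cc-style metric: the minimum dose received by the hottest
    `volume_cc` of the structure."""
    n_voxels = max(1, round(volume_cc / voxel_cc))
    hottest = sorted(dose_voxels, reverse=True)[:n_voxels]
    return hottest[-1]

# Toy structure: 10 voxels of 0.5 cc each (invented doses in Gy)
dose = [72, 71, 70, 69, 68, 50, 40, 35, 20, 10]
v70 = v_at_dose(dose, 70, voxel_cc=0.5)       # volume at/above 70 Gy
d1cc = d_at_volume(dose, 1.0, voxel_cc=0.5)   # dose to the hottest 1 cc
```

Metrics like V40Gy reported as a percentage are the same calculation divided by the structure's total volume.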
Item Open Access A Deep-Learning-based Multi-segment VMAT Plan Generation Algorithm from Patient Anatomy for Prostate Simultaneous Integrated Boost (SIB) Cases (2021) Zhu, Qingyuan. Introduction: Several studies have realized fluence-map-prediction-based DL IMRT planning algorithms, but DL-based VMAT planning remains unsolved. A main difficulty in DL-based VMAT planning is generating leaf sequences from the predicted radiation intensity maps: leaf sequences are required at a large number of control points and must meet the physical restrictions of the MLC. A previous study1 reported a DL algorithm that generates 64-beam IMRT plans approximating VMAT plans, with certain dose distributions as input. As a step forward, another study2 reported a DL algorithm that generates one-arc VMAT plans from patient anatomy, producing MLC leaf sequences from thresholded predicted intensity maps. Building on that work, we developed an algorithm that converts DL-predicted intensity maps into multi-segment VMAT plans to improve on the performance of one-arc plans.
Methods: Our deep learning model utilizes a series of 2D projections of a patient's dose prediction and contour structures to generate a multi-arc 360º dynamic MLC sequence in a VMAT plan. The backbone of this model is a novel U-net implementation with a 4-resolution-step analysis path and a 4-resolution-step synthesis path. The pretrained DL model involved a total of 130 patients, with 120 in the training group and 11 in the testing group. These patients were prescribed 70 Gy/58.8 Gy to the primary/boost PTVs in 28 fractions in a simultaneous integrated boost (SIB) regime. In this study, 7-8 arcs with the same collimator angle are used to reproduce the predicted intensity maps, which are separated into 7-8 segments along the collimator angle. The arcs can thus reproduce the predicted intensity maps separately, with independent weight factors; this separation also potentially allows the MLC leaves to capture more of the dose gradient in the predicted intensity maps. Results: After dose normalization (PTV70 V70Gy = 95%), all 11 multi-segment test plans met institutional clinical guidelines for dose distribution outside the PTV. Bladder (V70Gy = 5.3±3.3 cc, V40Gy = 16.1±8.6%) and rectum (V70Gy = 4.5±2.3 cc, V40Gy = 33.4±8.1%) results in multi-segment plans were comparable with the commercial TPS plan results. 3D max dose results in the multi-segment plans (D1cc = 112.6±1.9%) were higher than in the commercial TPS plans (D1cc = 106.7±0.8%). On average, the algorithm took 600 seconds to generate a plan, in contrast to current clinical practice (>20 minutes).
Conclusion: Results suggest that the multi-segment algorithm can generate a prostate VMAT plan with clinically acceptable dosimetric quality, and that it can achieve higher modulation and lower maximum dose. With its high efficiency, the multi-segment approach may hold great potential for real-time planning applications after further validation.
Item Open Access A New Method to Investigate RECA Therapeutic Effect (2020) Liu, Xiangyu. Introduction: RECA (Radiotherapy Enhanced with Cherenkov photo-Activation) is a novel treatment that induces a synergistic therapeutic effect by combining conventional radiation therapy with phototherapy using the anti-cancer and potentially immunogenic drug psoralen. This work presents a novel method to investigate the therapeutic effect of RECA using rat brain slices and an agarose-based tissue-equivalent material. Methods: 4T1 mCherry Firefly Luciferase mouse breast cancer cells were placed on the brain slices after exposure to psoralen solution. Fluorescence images of the brain slices were taken every day after irradiation, and an independent luciferase image was taken after the fifth fluorescence imaging session. Different image processing and analysis methods were used to identify the cells. Results: The four analysis methods gave different results for the fluorescence and luminescence signals. The overall fluorescence signal trends upward over the days, after reaching its lowest point 48 hours after irradiation. The control group (no radiation and no Cherenkov light) had the lowest signal compared with the other groups. The signal of brain slices with 4T1 cells exposed to psoralen solution was lower than that of brain slices without psoralen exposure. Conclusion: This work shows that rat brain slices can be used to simulate the in vivo environment when exploring the therapeutic effect of RECA. Future work should focus on improving the image analysis methods to better separate cells from noise.
Item Open Access A novel technique to irradiate surgical scars using dynamic electron arc radiotherapy (2017) Addido, Johannes. Purpose: The use of conformal electron beam therapy techniques to treat superficial tumors on uneven surfaces has often led to undesired outcomes such as non-uniform dose inside the target and a wide penumbra at the target boundary. The dynamic electron arc radiotherapy (DEAR) technique has been demonstrated to improve dose distribution and minimize penumbra. The aim of this study is to investigate the feasibility and accuracy of the DEAR technique for irradiating surgical scars.
Method: 3D scar coordinates (a series of connected points along a line) extracted from CT images were used. A treatment plan was designed to irradiate the scar with a uniform dose. An algorithm was developed to produce a DEAR plan consisting of control points (CPs) corresponding to positions along the machine's mechanical axes as a function of MU. Varian's Spreadsheet-based Automatic Generator (SAGE) software was used to verify and simulate the treatment and to generate the plan in XML format. The XML file was loaded on a TrueBeam Linac in research mode for delivery. The technique was demonstrated on i) a straight-line scar on the surface of a solid water phantom and ii) a curved scar on the surface of a cylindrical phantom. A 6 MeV beam was used with a 6×6 cm² applicator fitted with a 3×3 cm² cutout. Dose at the surface and at dmax was measured with Gafchromic film. Dose profiles calculated with the Eclipse eMC and Virtual Linac Monte Carlo tools were compared to the film measurements.
Results: The dose profile analysis shows that the TrueBeam Linac can deliver the designed plans for both straight-line and arc scars to a high degree of accuracy. The gantry-angle root mean square error (RMSE) is approximately 0.0035 for the line scar and 0.0349 for the arc scar; the straight-line delivery agrees more closely because its gantry angle is static, whereas the arc scar delivery involves more mechanical axis motion during delivery.
Conclusion: The DEAR technique can be used to treat line targets (scars), whether straight or curved, to a high degree of accuracy. This treatment modality could help reinvigorate electron therapy and make it a clinically viable treatment option.
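The RMSE figure of merit used to compare planned and delivered axis positions can be sketched as follows (hypothetical helper; the sample trajectories are invented, not the measured data):

```python
from math import sqrt

def rmse(planned, delivered):
    """Root mean square error between planned and delivered axis
    positions sampled at the same control points."""
    assert len(planned) == len(delivered)
    return sqrt(sum((p - d) ** 2 for p, d in zip(planned, delivered))
                / len(planned))

# Invented gantry-angle samples (degrees) at four control points
planned_gantry   = [180.0, 181.0, 182.0, 183.0]
delivered_gantry = [180.0, 181.05, 181.95, 183.0]
err = rmse(planned_gantry, delivered_gantry)
```

The same helper applies to any mechanical axis logged at the control points (gantry, collimator, couch), which is how per-axis agreement can be compared between the line-scar and arc-scar deliveries.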
Item Open Access A Prospective Method for Selecting the Optimal SPECT Pinhole Trajectory (2021) Tao, Xiangzhi. Abstract: Pinhole imaging is a widely used method for high-spatial-resolution single gamma imaging with a small required field of view (FOV). Many factors affect pinhole imaging: (I) the geometric parameters of the pinhole imaging system, such as the pinhole diameter, focal length, and opening angle; (II) the position, range, sampling interval, and sampling time of the pinhole trajectory; and (III) the image reconstruction algorithm. These factors trade off differently among resolution, sensitivity, noise level, imaging FOV, and data-sampling integrity. In pinhole imaging, many different pinhole trajectories might be considered. The conventional approach to assessing trajectories is to reconstruct images from each and then judge which image is best. That approach is time consuming, since (I) image reconstruction is slow and (II) image analysis often requires ensembles of images, which take time to calculate, consume considerable computer storage, and require investigator time to organize and analyze. The objective of this project is to develop a method to rapidly select the optimal SPECT pinhole trajectory from among several candidates. Equivalent Resolution Geometric Efficiency (ERGE) is proposed to jointly represent spatial resolution and geometric efficiency; a higher ERGE means a better trajectory. To verify this metric, two-dimensional and three-dimensional visualizations of the pinhole trajectory were implemented in software to assess trajectories visually and qualitatively. Several different trajectories were employed, projection data were computer-simulated, including spatial resolution blurring and pseudo-random Poisson noise, and image reconstruction was performed using the OSEM algorithm.
The reconstructed images were analyzed to characterize the performance of the different trajectories and to assess whether the best trajectory can be determined from the sensitivity and resolution characteristics of the individual pinhole locations that make up the trajectory. In this study, a relatively simple, low-cost prospective method for selecting the optimal SPECT pinhole trajectory was shown to be effective: only very fast and simple calculations, for example in Microsoft Excel, are required, and the method needs neither simulated or acquired projection data nor image reconstruction. The ranking by ERGE matches well with the ranking of reconstructed images based on Root Mean Square Error (RMSE). In clinical and scientific research, many different pinhole trajectories might be considered for 3D pinhole SPECT imaging, but assessing each trajectory via reconstructed images is too time-consuming; by demonstrating the validity of this method, this work may facilitate improved use of 3D pinhole SPECT imaging in clinical and scientific research. Keywords: Pinhole Trajectory, SPECT, Forward Projection, OSEM, Equivalent Resolution Geometric Efficiency (ERGE), Root Mean Square Error (RMSE).
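The thesis's exact ERGE formula is not reproduced here; as a rough sketch of the kind of reconstruction-free, per-position scoring it enables, one can rank candidate trajectories with the classic pinhole geometric-efficiency formula g = d²·cos³θ / (16h²). The function names, trajectory names, and all numbers below are invented for illustration:

```python
from math import cos, radians

def pinhole_efficiency(d_mm, h_mm, theta_deg=0.0):
    """Classic on-axis pinhole geometric efficiency for a point source:
    g = d^2 * cos^3(theta) / (16 * h^2), with aperture diameter d,
    source-to-pinhole distance h, and incidence angle theta."""
    return (d_mm ** 2) * cos(radians(theta_deg)) ** 3 / (16.0 * h_mm ** 2)

def rank_trajectories(trajectories):
    """Rank candidate trajectories by summed per-position efficiency
    (a stand-in for an ERGE-like score; higher is better). Each
    trajectory is a list of (d, h, theta) tuples, one per position."""
    scores = {name: sum(pinhole_efficiency(*pos) for pos in positions)
              for name, positions in trajectories.items()}
    return sorted(scores, key=scores.get, reverse=True)

candidates = {
    "close_orbit": [(2.0, 50.0, 0.0), (2.0, 55.0, 10.0)],
    "far_orbit":   [(2.0, 80.0, 0.0), (2.0, 85.0, 10.0)],
}
best = rank_trajectories(candidates)[0]
```

Because each position contributes a closed-form number, comparing dozens of trajectories takes milliseconds, which is the point of the prospective method.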
Item Open Access A Tool for Approximating Radiotherapy Delivery via Informed Simulation (TARDIS) (2020) Chuang, Kai-Cheng. Purpose: The multi-leaf collimator (MLC) is a critical component in intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT). Discrepancies between planned and actual MLC positions directly affect the quality of treatment. This study analyzed MLC positional and gantry angle discrepancies via trajectory log files from Varian TrueBeam linear accelerators to determine the consistency of machine performance over the course of treatment. The mechanical parameters that affect accuracy were examined and used to build a machine learning algorithm that predicts the discrepancies of moving components. A tool was developed to predict treatment delivery discrepancies on a Varian TrueBeam linear accelerator for any given plan, simulating radiotherapy treatment delivery without actual delivery.
Methods: Trajectory log files of 116 IMRT plans and 125 VMAT plans from nine Varian TrueBeam linear accelerators were collected and analyzed. Data was binned by treatment site and machine type to determine their relationship with MLC and gantry angle discrepancies. Trajectory log files were used to evaluate whether MLC positional accuracy was consistent between patient-specific quality assurance (QA) and the course of treatment. Mechanical parameters including MLC velocity, MLC acceleration, gantry angle, gantry velocity, gantry acceleration, collimator angle, control point, dose rate, and gravity vector were analyzed to evaluate correlations with delivery discrepancies. A regression model was used to develop a machine learning algorithm to predict delivery discrepancies based on mechanical parameters.
Results: MLC discrepancies at pre-treatment patient-specific QA differed from the course of treatment by a small (mean = 0.0031 ± 0.0036 mm, p = 0.0089 for IMRT; mean = 0.0014 ± 0.0016 mm, p = 0.0003 for VMAT) but statistically significant amount, likely due to setting the gantry angle to zero for QA. Mechanical parameters showed significant correlation with MLC discrepancies, especially MLC velocity, which had an approximately linear relationship (β = -0.0027, R2 = 0.79). Incorporating other mechanical parameters, the final generalized model trained by data from all linear accelerators can predict MLC errors to a high degree of accuracy with high correlation (R2 = 0.86) between predicted and actual errors. The same prediction model performed well across different treatment sites and linear accelerators; however, a significant difference was found in the predictions made by models trained using different treatment techniques (IMRT vs VMAT) (mean difference of RMSE = 0.0153 ± 0.0040 mm).
Conclusion: We have developed a machine learning model that uses prior trajectory log files to predict MLC discrepancies on TrueBeam linear accelerators. This model has been released as a research tool that generates a DICOM-RT file with predicted MLC positions from the original DICOM-RT file as input. This tool can be used to simulate radiotherapy treatment delivery and may be useful for studies evaluating plan robustness and dosimetric uncertainties in treatment delivery.
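The reported near-linear dependence of MLC discrepancy on leaf velocity (β = -0.0027) can be illustrated with a one-predictor least-squares fit. The data points below are invented to lie exactly on that slope, so the fit recovers it; this is a sketch, not the study's regression model:

```python
def fit_linear(x, y):
    """Ordinary least squares for a single predictor: returns
    (slope, intercept) minimizing the sum of squared residuals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Invented data shaped like the reported trend: the discrepancy
# grows in magnitude as leaf speed rises.
velocity = [0.0, 5.0, 10.0, 20.0]             # mm/s
discrepancy = [0.0, -0.0135, -0.027, -0.054]  # mm
slope, intercept = fit_linear(velocity, discrepancy)
```

The full model in the study also folds in gantry kinematics, gravity vector, dose rate, and other parameters; velocity alone already explains most of the variance (R² = 0.79 reported).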
Item Open Access An Investigation in Quantitative Accuracy in Preablation I-131 Scans: 7-pinhole system compared with single-pinhole system (2018) Yu, Tingting. Purpose: Early detection and prevention of differentiated thyroid cancer using thyroidectomy and ablation therapy can reduce disease persistence and recurrence. A preablation I-131 scan performed between the thyroidectomy and ablation therapy may improve patient management. Although I-123 would involve less radiation dose, I-131 is widely available, and its long half-life enables imaging 24-72 hours after injection, which is crucial for dosimetry and enhances the visibility of distant metastases. Typically, 2-10 mCi of I-131 is administered; to avoid stunning effects, 2 mCi is suggested, but at 2 mCi the images are noisy. Compared with standard single-pinhole SPECT systems, 7-pinhole systems may provide greater geometric efficiency at a smaller pinhole diameter and thereby reduce noise. Due to the size of 7-pinhole systems, however, collision constraints may increase the pinhole radius of rotation (ROR), reducing efficiency. Herein we assess the competing effects of more pinholes and larger ROR to determine whether 7 pinholes could meaningfully improve efficiency at comparable or better spatial resolution.
Methods: Radiotracer distributions and attenuation were computer-simulated using a modified XCAT phantom. Single-pinhole and 7-pinhole trajectories were developed to provide minimal RORs while avoiding collision with the patient. Reconstructed images were computer-simulated, modeling attenuation, spatial resolution, and Poisson noise. The single-pinhole and 7-pinhole systems were compared over a range of lesion sizes and activity concentrations. Comparison metrics included lesion conspicuity, uniformity, and contrast, as well as image quality in terms of noise, contrast recovery, and RMSE. Gamma camera sensitivity and spatial resolution were also assessed.
Results: In this study, seven-pinhole configurations were compared to a clinically typical single-pinhole system. In the low-count study, the seven-pinhole system with a 4-5 mm pinhole diameter outperformed the benchmark single-pinhole system; in the high-count study, the seven-pinhole system with a 3 mm pinhole diameter outperformed it. However, for the pinhole configuration considered here, the increases in ROR are great enough to substantially reduce the benefit of seven pinholes.
Conclusion: Seven pinholes may be suitable for preablation scans because the higher sensitivity allows better detection of lesions with low activity concentration, and the smaller pinhole diameter allows better resolution of metastases.
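The competing effects described above (more pinholes vs. the larger orbit they force) can be captured by a back-of-the-envelope sensitivity ratio proportional to N·d²/ROR². All numbers below are invented for illustration, not taken from the thesis:

```python
def relative_sensitivity(n_pinholes, d_mm, ror_mm):
    """Relative on-axis geometric sensitivity, proportional to
    N * d^2 / ROR^2; shared constants cancel when comparing systems."""
    return n_pinholes * d_mm ** 2 / ror_mm ** 2

# Invented geometry: seven smaller pinholes pushed to a larger ROR
# by collision constraints vs. one pinhole on a tight orbit.
single = relative_sensitivity(1, d_mm=5.0, ror_mm=150.0)
seven = relative_sensitivity(7, d_mm=4.0, ror_mm=250.0)
gain = seven / single
```

With these made-up numbers the seven-pinhole system still nets roughly a 1.6× sensitivity gain, but the 1/ROR² factor shows how quickly a larger orbit erodes the N-fold advantage, which is the trade-off the study quantifies.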
Item Open Access An Investigation of Machine Learning Methods for Delta-radiomic Feature Analysis (2018) Chang, Yushi. Background: Radiomics is the process of converting medical images into high-dimensional quantitative features and subsequently mining these features to provide decision support. It is a potential noninvasive, low-cost, patient-specific routine clinical tool. Building a predictive model that is reliable, efficient, and accurate is vital to the success of radiomics, and machine learning is a powerful tool for achieving this. Feature extraction strongly affects performance; delta-features are one feature-extraction approach that reflects the spatial variation in tumor phenotype and hence could provide better treatment-specific assessment.
Purpose: To compare the performance of using pre-treatment features and delta-features for assessing the brain radiosurgery treatment response, and to investigate the performance of different combinations of machine learning methods for feature selection and for feature classification.
Materials and Methods: A cohort of 12 patients treated with brain radiosurgery was included in this research. Pre-treatment, one-week post-treatment, and two-month post-treatment T1 and T2-FLAIR MR images were acquired, and 61 radiomic features were extracted from the gross tumor volume (GTV) of each image. Delta-features from pre-treatment to the two post-treatment time points were calculated. With leave-one-out sampling, the pre-treatment features and the two sets of delta-features were separately input into a univariate Cox regression model and a machine learning model (L1-regularized logistic regression [L1-LR], random forest [RF], or neural network [NN]) for feature selection. A machine learning method (L1-LR, L2-regularized logistic regression [L2-LR], RF, NN, kernel support vector machine [Kernel-SVM], linear-SVM, or naïve Bayes [NB]) was then used to build a classification model to predict overall survival. The performance of each model combination and feature type was estimated by the area under the receiver operating characteristic (ROC) curve (AUC).
Results: The AUC of one-week delta-features was significantly higher than that of pre-treatment features (p-value < 0.0001) and two-month delta-features (p-value = 0.000). The model combinations of L1-LR for feature selection with RF for classification, and RF for feature selection with NB for classification, based on one-week delta-features, presented the highest AUC values (both AUC = 0.944).
Conclusions: This work implies that delta-features may be better at predicting treatment response than pre-treatment features, and that the time point at which the delta-features are computed is a vital factor in assessment performance. Analyzing delta-features with a suitable machine learning approach is potentially a powerful tool for assessing treatment response.
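A delta-feature is simply the change in each radiomic feature between two imaging time points. A minimal sketch using fractional change (the feature names and values below are invented; the thesis's exact delta definition may differ):

```python
def delta_features(pre, post):
    """Relative delta-features: fractional change of each radiomic
    feature from the pre-treatment scan to a post-treatment scan."""
    return {name: (post[name] - pre[name]) / pre[name] for name in pre}

# Invented example: tumor shrinks, mean intensity rises
pre = {"gtv_volume": 12.0, "mean_intensity": 300.0}
post = {"gtv_volume": 9.0, "mean_intensity": 330.0}
deltas = delta_features(pre, post)
```

The resulting dictionary of per-feature changes is what gets fed into the feature-selection and classification models in place of the raw pre-treatment values.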
Item Open Access An Investigation of MR Sequences for Partial Volume Correction in PET Image Reconstruction (2019) Wang, Gong. Brain positron emission tomography (PET) has been widely employed for the clinical diagnosis of Alzheimer's disease (AD). Studies have shown that PET imaging is helpful in differentiating healthy elderly individuals, individuals with mild cognitive impairment (MCI), and individuals with AD (Nordberg, Rinne, Kadir, & Långström, 2010). However, PET image quality and quantitative accuracy are degraded by partial volume effects (PVEs), which are due to the poor spatial resolution of PET. Compensating for PVEs may therefore be of great significance for improving early diagnosis of AD. Many approaches are available to address PVEs, including region-based and voxel-based methods. In this study, a voxel-based PVE compensation technique using high-resolution anatomical images (computed tomography (CT) or magnetic resonance imaging (MRI)) was investigated. Such methods have been proposed and investigated in many studies (Vunckx et al., 2012), but relatively little research has compared the effects of different MRI images on voxel-based PVE correction. Here we compare the effect of 6 different MRI protocols on PVE compensation in PET images: T1-, T2-, and proton-density (PD)-weighted protocols and 3 different inversion recovery protocols.
Results: OSEM and MAP/ICD images with an isotropic prior are blurry and/or noisy. Compared with these, the PET images reconstructed using anatomical information show better contrast and less noise. Visually, the ZeroCSF prior gave the PET image that appears to match the PET phantom best. PET images reconstructed with the T2, PD, and ZeroWM images are similar to one another in quality, but relative to the PET phantom and the ZeroCSF PET image they have poor contrast between CSF pockets and the surrounding GM tissue, and less contrast between GM and WM. The PET image reconstructed with the T1 image had better GM-CSF contrast, and some of the CSF pockets in GM were recovered, but the WM region was very noisy. PET images reconstructed with the ZeroGM image performed noticeably worse on GM reconstruction. Analyses suggest that these effects are caused by differences in tissue contrast among the MRI protocols.
Keywords: PET, MRI, partial volume effect, image reconstruction, SPECT, Alzheimer's disease.
Item Open Access Automatic Pulmonary Nodule Detection and Localization from Biplanar Chest Radiographs Using Convolutional Neural Network(2019) Hu, Shen-ChiangChest x-ray (CXR) is the most common examination for pulmonary nodule detection, and an automatic nodule detection algorithm is desirable. Convolutional neural networks (CNNs) are now widely applied to CXR. However, datasets with clear nodule annotations are scarce, the small size of pulmonary nodules hampers performance, and no study has examined lung nodule detection using an end-to-end CNN model with lateral CXR images. In this study, coronal and lateral CXR images were generated from a CT phantom for separate training, and U-Net architecture CNN models were implemented with modifications including changing the number of convolutional layers, adding shortcut connections, and using a weighted loss function; the impact of these modifications on model performance was evaluated. Finally, the models were tested on a test set under conditions of varying nodule diameter, number, and location. On the CT phantom dataset, the U-Net trained with residual units and a weighted loss was capable of detecting 5 mm nodules and trained faster. Overall, the model trained with coronal images provided better detection results than the one using lateral images, but their outputs could be combined to obtain 3D nodule localization. The number and adjacency of nodules had no prominent effect on detection; however, the models were prone to failure when a nodule was too small (< 5 mm), was close to the edges of the lung, or overlapped with moderate- to high-density anatomic structures.
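The weighted loss function mentioned above counteracts the extreme imbalance between nodule and background pixels. A toy sketch of per-pixel weighted binary cross-entropy on flat lists (the `pos_weight` value is a hypothetical choice, not a thesis parameter):

```python
import math

def weighted_bce(preds, labels, pos_weight=10.0):
    """Pixel-wise binary cross-entropy with a larger weight on nodule
    (positive) pixels, compensating for the tiny nodule-to-lung area ratio.
    Toy version on flat lists; the thesis applies this idea inside a U-Net."""
    eps = 1e-7
    total = 0.0
    for p, y in zip(preds, labels):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        total += -(pos_weight * y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(preds)
```

Raising `pos_weight` scales up the penalty for missed nodule pixels, pushing the network away from the trivial all-background prediction.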
Item Open Access Automatic Treatment Planning for Multi-focal Dynamic Conformal Arc GRID Therapy for Late Stage Lung Cancer: A Feasibility Study(2023) You, YuanPurpose: To develop a heuristic greedy algorithm for automatic multi-leaf collimator (MLC) sequencing in spatially fractionated radiation therapy (SFRT) using 3D dynamic conformal arcs (DCA). Methods and materials: One late-stage lung cancer patient with a simulated sphere-target grid was included in this study. N_t spheres were equally spaced within the gross target volume (GTV). The sphere targets were 1.5 cm in diameter, with 4.3 cm spacing for the 6-, 9-, 10-, and 12-target scenarios and 2.8 cm spacing for one special 10-target scenario. Optimization was designed to complete within one coplanar arc from 180° to 0° in a clockwise direction with a 2° angle interval. The problem is formalized as finding the optimal MLC sequencing to cover N_t targets with K control points (CPs) per arc, where the state of each target's MLC opening at each CP is binary. The original NP-hard problem can be approximated by a feasible subproblem through greedy approximation at each control point and a heuristic choice of the initial point. The algorithm uses the normalized relative dose relationship as the objective function during optimization. The dose matrix for each step was rasterized and grouped based on Monte Carlo simulation as a pre-calculation step. The physical speed limitation of MLC motion was included in the optimization to achieve a realistic, deliverable final MLC sequence. Four grid arrays (6, 9, 10, and 12 targets) were tested for plan quality, and each was planned with both 0° and 30° collimator angles for comparison. The prescription was 20 Gy in one fraction, and the delivered dose was normalized so that the minimum target dose equals the prescription dose. Key dosimetric endpoints, including target mean dose, D5, and D95, were reported.
Results: The complexity of the algorithm is reduced by a factor of 2^K / [2(K-1)]. The D95 deviations of all targets, the main objective, were within 2.88% across the four grid arrays with 0°/30° collimator rotation angles, 4.3 cm spacing for the 6-, 9-, 10-, and 12-target scenarios, and 2.8 cm spacing for the special 10-target scenario. For all scenarios with 4.3 cm spacing, the mean valley-to-peak ratios were under 0.45 and satisfied the optimization constraint that the dose to the rest of the tumor be no more than 45% of the maximum normalized D95 target dose. Conclusion: This algorithm is a feasible and practical method with high efficiency for delivering the prescription dose to small target volumes in the palliative management of late-stage cancer. The proposed solution provides good target coverage as well as a favorable valley-to-peak ratio, and offers a competitive alternative to the standard alloy grid delivery technique.
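The greedy per-control-point idea can be illustrated with a deliberately simplified model (unit dose per open state and no MLC travel constraint are assumptions of this sketch, not the thesis's dose model): at each control point, open the apertures for the targets with the lowest accumulated dose, so that coverage equalizes across targets instead of searching all 2^K binary state sequences.

```python
def greedy_mlc_schedule(num_targets, num_cps, max_open):
    """Greedy approximation sketch: at each control point (CP), open the MLC
    for the `max_open` targets with the lowest accumulated relative dose.
    Hypothetical simplification: each open state adds one dose unit."""
    dose = [0.0] * num_targets
    schedule = []
    for _ in range(num_cps):
        # rank targets by accumulated dose, neediest first
        order = sorted(range(num_targets), key=lambda t: dose[t])
        open_set = set(order[:max_open])
        for t in open_set:
            dose[t] += 1.0
        schedule.append(sorted(open_set))
    return schedule, dose
```

Because each CP is decided locally, the search is linear in the number of control points rather than exponential in K, which is the source of the complexity reduction reported above.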
Item Open Access Build 5DCT by Connecting Cardiac ECG 4DCT with Respiratory 4DCT for Heart Motion Management in Stereotactic Tachycardia Radiosurgery(2023) Liu, ShiyiPurpose: To develop a generic procedure for generating 5DCT from ECG 4DCTs and respiratory 4DCTs of cardiac RT patients. The 5DCT, whose dimensions consist of the 3D volume, the cardiac cycle, and the respiratory cycle, will be used to quantitatively evaluate respiratory and cardiac motion of the heart and to support cardiac RT motion management, 5D dose calculation, and dosimetric motion assessment. Methods: ECG 4DCTs and respiratory 4DCTs of cardiac RT patients were obtained from the clinical system with IRB approval. For each patient, the ten ECG 4DCT phases were registered using the groupwise deformable image registration algorithm GroupRegNet, producing a template ECG CT that represents the accurate average heart anatomy (rather than an intensity-averaged CT) together with the cardiac 4D deformation vector field (DVF). The ECG CT template and the ten respiratory 4DCT phases were then registered in a second groupwise registration to compute the respiratory 4D DVF. The DVFs from the two groupwise registrations connect the ECG 4DCTs to the respiratory 4DCTs. A 10×10 cardiorespiratory 5DCT volume was generated by warping the ECG phases with the composed DVFs. The final 5DCT phases were evaluated manually by visually checking the respiratory and cardiac motion of the heart chambers. Results: The 5DCT generation procedure was implemented in Python and MATLAB and was successfully applied to 4DCT images from five cardiac RT patients. The registration results were satisfactory based on visual evaluation. Quantitative evaluation and 5D dose calculation are planned as future work. Conclusion: A practical and effective procedure was developed to assess 5D motion of the heart and generate 5DCT phases from clinical ECG 4DCTs and respiratory 4DCTs.
The generated 5DCT could be used in dose calculation to assess the effect of 5D motion of the heart chambers on dosimetry for cardiac RT treatments.
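Warping the ECG phases with composed DVFs chains the cardiac and respiratory deformations. In 1D, composing two displacement fields reduces to d(x) = d_inner(x) + d_outer(x + d_inner(x)); a toy sketch with continuous fields (real DVFs are discrete grids requiring interpolation, and this is an illustrative simplification, not the thesis implementation):

```python
def compose_dvf(d_outer, d_inner, positions):
    """Compose two 1D displacement fields given as callables: applying
    d_inner then d_outer equals the single field
        d(x) = d_inner(x) + d_outer(x + d_inner(x)).
    This mirrors how a cardiac DVF and a respiratory DVF can be chained
    to warp an ECG phase into a combined 5DCT phase."""
    return [d_inner(x) + d_outer(x + d_inner(x)) for x in positions]
```

Note that the inner field must be evaluated first and the outer field sampled at the displaced position; summing the two fields at the same point is not a valid composition.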
Item Open Access Building a patient-specific model using transfer learning for 4D-CBCT augmentation(2020) Sun, LeshanPurpose: Four-dimensional cone beam computed tomography (4D-CBCT) has been developed to provide respiratory phase-resolved volumetric images in aid of image-guided radiation therapy (IGRT), especially SBRT, which requires highly accurate dose delivery. However, 4D-CBCT suffers from insufficient projection data in each phase bin, which leads to severe noise and artifacts. To address this problem, deep learning methods have been introduced to augment image quality. However, images augmented by traditional deep learning methods tend to lose small details such as lung textures. In this study, a transfer learning method was proposed to further improve the image quality of deep-learning-augmented CBCT for one specific patient.
Methods: The network architecture used for transfer learning is a standard U-net. CBCT images were reconstructed using limited projections that were either simulated from ground truth CT images or taken directly from the clinic. In the transfer learning training process, the network was first fed with data from different patients to learn a general restoration process for augmenting under-sampled CBCT images from any patient. The restoration pattern was then refined for one specific patient by re-feeding the network with that patient's data from prior days. The performance of transfer learning was evaluated by comparing the augmented CBCT images to those of the traditional deep learning method, both qualitatively and quantitatively, using the structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR).
To study the effectiveness and time efficiency of transfer learning, two transfer learning methods, whole-layer fine-tuning and layer freezing, were compared to each other. Two training methods, whole-data tuning and sequential tuning, were also employed to further explore the possibility of improving transfer learning's performance and reducing training time.
Results: The comparison demonstrated that images augmented by the transfer learning method not only recovered more detailed information in the lung area but also had more uniform pixel values than the basic U-net images when compared to the ground truth. Of the two transfer learning methods (whole-layer fine-tuning and layer freezing) and the two training methods (sequential training and all-data training), all-data training with layer freezing was found to be time-efficient, with training time as short as 10 minutes. In the study of the effect of projection number, transfer-learning-augmented CBCT images reconstructed from as few as 90 of 900 projections still showed improvement over the U-net-augmented images.
Conclusion: Overall, the transfer-learning-based image augmentation method is efficient and effective at improving the quality of under-sampled 3D/4D-CBCT images relative to traditional deep learning methods. Given its relatively fast computational speed and strong performance, it can be very valuable for 4D image-guided radiation therapy.
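PSNR, one of the two quantitative metrics used above, can be sketched in a few lines (flat-list toy version; real evaluations run over 3D/4D volumes):

```python
import math

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio between a reference image and a test
    image, both given as flat lists of pixel values in [0, data_range]."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(data_range ** 2 / mse)
```

Higher PSNR means the augmented CBCT is closer to the ground truth; the metric is logarithmic, so a 3 dB gain corresponds to halving the mean squared error.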
Item Open Access Cone Beam Computed Tomography Image Quality Augmentation using Novel Deep Learning Networks(2019) Zhao, YaoPurpose: Cone beam computed tomography (CBCT) plays an important role in image guidance for interventional radiology and radiation therapy by providing 3D volumetric images of the patient. However, CBCT suffers from relatively low image quality with severe image artifacts due to the nature of the image acquisition and reconstruction process. This work investigated the feasibility of using deep learning networks to substantially augment the image quality of CBCT by learning a direct mapping from the original CBCT images to their corresponding ground truth CT images. The possibility of using deep learning for scatter correction in CBCT projections was also investigated.
Methods: Two deep learning networks, a symmetric residual convolutional neural network (SR-CNN) and a U-net convolutional network, were trained to use input CBCT images to produce high-quality CBCT images that match the corresponding ground truth CT images. Both clinical and Monte Carlo simulated datasets were included for model training. To eliminate misalignments between CBCT and the corresponding CT, rigid registration was applied to the clinical database. Binary masks obtained by Otsu auto-thresholding were applied to the Monte Carlo simulated data to avoid the negative impact of non-anatomical structures on the images. After model training, a new set of CBCT images was fed into the trained network to obtain augmented CBCT images, and the performances were evaluated and compared both qualitatively and quantitatively. The augmented CBCT images were quantitatively compared to CT using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM).
For the study of deep-learning-based scatter correction in CBCT, the scatter signal for each projection was obtained by Monte Carlo simulation. A U-net model was trained to predict the scatter signal from the original CBCT projections. The predicted scatter component was then subtracted from each original projection to obtain scatter-corrected projections. CBCT images reconstructed from the scatter-corrected projections were quantitatively compared with those reconstructed from the original projections.
Results: The CBCT images augmented by both the SR-CNN and U-net models showed substantial improvement in image quality. Compared to the original CBCT, the augmented CBCT images also achieved much higher PSNR and SSIM in quantitative evaluation. U-net demonstrated better performance than SR-CNN in both quantitative evaluation and computational speed for CBCT image quality augmentation.
With the scatter correction predicted by U-net, the scatter-corrected CBCT images demonstrated substantial improvement in image contrast and anatomical detail compared to the original CBCT images.
Conclusion: The proposed deep learning models can effectively augment CBCT image quality by correcting artifacts and reducing scatter. Given their relatively fast computational speeds and great performance, they can potentially become valuable tools to substantially enhance the quality of CBCT to improve its precision for target localization and adaptive radiotherapy.
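The projection-domain scatter correction described above reduces to subtracting the network-predicted scatter signal from each raw projection; the non-negativity clamp below is a practical assumption of this sketch, not a detail stated in the abstract:

```python
def scatter_correct(projection, predicted_scatter, floor=0.0):
    """Subtract the predicted scatter signal from a raw projection
    (both flat lists of detector-pixel intensities), clamping at `floor`
    so corrected intensities stay non-negative before reconstruction."""
    return [max(p - s, floor) for p, s in zip(projection, predicted_scatter)]
```

The corrected projections then feed the standard reconstruction pipeline unchanged, which is what makes this correction easy to slot into an existing CBCT workflow.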
Item Embargo CT-Based Thyroid Cancer Diagnosis using Deep Learning and Radiomics Fusion Method(2024) Dong, YunfeiPurpose: The aim of this study was to address the limitations observed in past research, particularly the limited accuracy of individual deep learning or radiomics methods on small datasets. By developing a fusion approach that integrates the two techniques, we hypothesized that performance in CT-based thyroid cancer diagnosis could be improved. Materials and Methods: Eighty-five patients with thyroid tumors (58 malignant, 27 benign) who underwent CT scans were included in this study. The dataset was divided into training (70%) and testing (30%) sets. A shallow CNN model with five convolutional layers and two fully connected layers was developed for tumor classification. Radiomics features were extracted and selected using the pyradiomics package and statistical tests (t-test, etc.). These features were then used to develop a multiple logistic regression (MLR) model for tumor classification. The CNN and MLR models were combined using a fusion method that computes a weighted sum of the two diagnostic outputs for classification. The accuracy of the diagnostic methods was evaluated for the individual models and the fusion model. The statistical significance of the weighted combination model was examined using the Wilcoxon test. Results: The CNN model achieved an accuracy of 82.713%, and the MLR model achieved an accuracy of 76.596%. The accuracy of the fusion model reached 85.372%, suggesting improved performance of the fusion approach over the individual models. The Wilcoxon test yielded a W-statistic of 19410.0 and a p-value of 2.96×10^(-14), below the 0.05 threshold. Conclusion: A fusion model combining deep learning and radiomics methods was developed and showed improved accuracy in thyroid tumor diagnosis on a small dataset.
The results showed a statistically significant difference between the fusion model and the individual models.
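The weighted-sum fusion rule is simple to state in code; the weight below is a free parameter of this sketch, not the value tuned in the thesis:

```python
def fuse_probabilities(p_cnn, p_mlr, w=0.5):
    """Weighted-sum fusion of the two classifiers' malignancy probabilities.
    `w` is the CNN weight; (1 - w) goes to the radiomics MLR model."""
    return w * p_cnn + (1.0 - w) * p_mlr

def classify(p_cnn, p_mlr, w=0.5, threshold=0.5):
    """Threshold the fused probability into a binary malignant/benign call."""
    return int(fuse_probabilities(p_cnn, p_mlr, w) >= threshold)
```

The weight controls how much each model's output matters; sweeping `w` on a validation split is one natural way to pick it, which is consistent with the weighted-combination evaluation described above.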
Item Embargo Deep-Learning-Based Auto-Segmentation for Cone Beam Computed Tomography (CBCT) in Cervical Cancer Radiation Therapy(2024) Wu, YuduoBackground: Cervical cancer is a common gynecological malignancy among women worldwide. Among the primary modalities for treating cervical cancer, radiation therapy occupies a central role. Using cone-beam computed tomography (CBCT) scans obtained prior to treatment for target registration and alignment is critically important for precision radiation therapy. Accurately contouring targets and organs-at-risk (OARs) is the most time-consuming task for radiation oncologists, and OAR contouring on CBCT plays a crucial role in cervical cancer radiotherapy. Specifically, the location and volume of the rectum and bladder can significantly affect the precision of cervical cancer treatment, because patients must drink a certain amount of water to fill the bladder before treatment for target localization; the resulting changes in rectum and bladder position may alter the target dose. Further, changes in the radiation dose to these two OARs directly affect the severity of acute and late radiation-induced damage. Therefore, OAR contouring not only allows better localization before each radiotherapy session but also provides a valuable reference for clinicians when the treatment plan must be adjusted. Purpose: The objective of this study is to evaluate the capabilities of four deep learning models for contouring OARs in CBCT images of cervical cancer patients. Materials and Methods: The study dataset, comprising 40 sets of CBCT images, was collected from the Fujian Provincial Cancer Hospital in China. Two experienced radiation oncologists meticulously delineated 10 groups of OARs (Body, Bladder, Bone Marrow, Bowel Bag, Femoral Head L, Femoral Head R, Femoral Head and Neck L, Femoral Head and Neck R, Rectum, Spinal Canal) on the CBCT images as the reference/ground truth.
Subsequently, 24 of the reference CBCT sets were used to train each model, and the unedited CBCT images of the remaining 16 sets were compared with their references to test the four models. The only difference among the four models is the neural network structure: classic U-Net, Flex U-Net, Attention U-Net (ATT), and SegResNet. Contouring quality was evaluated using metrics including the 95th-percentile Hausdorff Distance (HD95), Dice Similarity Coefficient (DICE), Average Symmetric Surface Distance (ASSD), Maximum Symmetric Surface Distance (MSSD), and Relative Absolute Volume Difference (RAVD). Results: The average DICE across the four models was 0.86 for bladder contouring and 0.84 for rectum contouring on the CBCT images. Conclusion: According to the quantitative analysis, the classic U-Net architecture with minor adjustments can achieve competitive segmentation on CBCT images.
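The Dice similarity coefficient reported above can be computed directly from a predicted and a reference binary mask; a flat-list sketch (real evaluations operate on 3D voxel masks):

```python
def dice(pred_mask, ref_mask):
    """Dice similarity coefficient between a predicted and a reference
    binary mask (flat lists of 0/1): 2|A∩B| / (|A| + |B|)."""
    inter = sum(p * r for p, r in zip(pred_mask, ref_mask))
    total = sum(pred_mask) + sum(ref_mask)
    return 1.0 if total == 0 else 2.0 * inter / total  # empty masks count as a perfect match
```

DICE rewards overlap relative to the combined mask sizes, which is why it is paired with surface-distance metrics such as HD95 that catch boundary errors DICE can miss.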
Item Open Access Deriving Lung Ventilation MAP Directly from Auto Segmented CT Images Using Deep Convolutional Neural Network (CNN)(2022) Li, NanLung cancer has been the most commonly occurring cancer (J. Ferlay, 2018), with the highest fatality rate worldwide. Lung cancer patients undergoing radiation therapy typically experience many side effects. To reduce these adverse effects, lung function (ventilation state)-guided radiation therapy has been highly recommended. Functional Lung Avoidance Radiation Therapy (FLA-RT) can selectively avoid high-dose irradiation of well-functioning lung regions and reduce lung injury during RT (Azza Ahmed Khalil, 2021). FLA-RT, however, requires lung function information for the treatment process. Conventional techniques for acquiring a lung function map (S. W. Harders, 2013) include the 99mTc SPECT technique (Suga, 2002), the 99mTc MRI technique (LindsayMathew, 2012), and the 68Ga PET technique (Jason Callahan, 2013). These techniques, however, suffer from high cost, labor-intensive preparation, and low accessibility for radiation oncology departments. This research investigates whether lung function images can be generated from routine planning CT images using a CNN. This study also develops an image segmentation method to automatically and accurately segment the lung volume in chest CT images. This study retrospectively analyzed 99mTc DTPA SPECT scans of 21 cases, open-source data from VAMPIRE (Ventilation Archive for Medical Pulmonary Image Registration Evaluation), established by John Kipritidis, Henry C. Woodruff, and Paul J. Keall of the Radiation Physics Laboratory at the University of Sydney, Australia. The CT images and reference mask images were 512 × 512 matrices with a pixel size of 2.5 × 2.5 mm2 and 3 mm slice thickness. The SPECT images were reconstructed in 512 × 512 matrices with 2.5 × 2.5 × 2.5 mm3 voxel size.
CT, reference mask, and SPECT images are all in ".mha" format for each case. This study contains two major components. First, a deep learning model was developed to auto-segment the lung region from the CT images. Second, another deep learning model was developed to use the segmented lung CT image as input to predict the lung ventilation function map. For the first task, we used the CT images as the network input and the reference mask images as the network output, training a designated 2D U-shape backbone network to generate the first model. To test model performance, pixel accuracy, pixel recall, pixel precision, and intersection over union (IoU) were used as assessment criteria to evaluate the quality of the model-generated lung masks against the ground truth masks. For the second task, we used the segmented lung CT images as the network input and the SPECT images as the network output, training another designated 3D U-shape backbone network to generate the second model. To test the second model, the Spearman correlation coefficient (Piantadosi) was used to evaluate the correlation between the model-generated lung function images and the ground truth SPECT images. To achieve the optimal outcome, this study ran parallel studies comparing the influence of different training strategies on the outcome (see Chapter 3.3.2): two aspects for DL Model 1 and four aspects for DL Model 2. Training the designed network with three-channel data as input provided the best segmentation results. For test case 1, pixel accuracy was 0.935±0.033, pixel recall 0.942±0.029, pixel precision 0.942±0.032, and IoU 0.891±0.042.
For test case 2, pixel accuracy was 0.950±0.024, pixel recall 0.961±0.015, pixel precision 0.943±0.028, and IoU 0.909±0.036. For deriving the lung function images, training the designed network with the chest CT segmented by the ground truth mask, [-1, 1] normalization, and a 32 × 32 × 64 training patch size as inputs provided the best results: the Spearman correlation coefficients for cases 1 and 2 were 0.8689±0.038 and 0.8716±0.036, respectively. This preliminary study using the designed U-shape backbone convolutional neural networks (CNNs) achieved satisfactory auto-segmentation results and promising lung function maps, indicating the feasibility of directly deriving the lung ventilation state (SPECT-like images) from CT images. The CNN-derived SPECT-like lung functional images might serve as a reference for FLA-RT.
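The Spearman correlation used above to compare the derived ventilation maps with the ground-truth SPECT can be sketched as follows (assuming no tied values, so the closed-form rank-difference expression applies):

```python
def spearman(xs, ys):
    """Spearman's rank correlation between two equal-length sequences.
    With no ties, it equals 1 - 6*sum(d^2) / (n*(n^2 - 1)), where d is
    the per-element difference of ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```

Because it compares ranks rather than raw intensities, Spearman correlation is insensitive to any monotonic intensity mapping between the CNN output and the SPECT counts, a sensible property when the two modalities have different intensity scales.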
Item Open Access Developing a Platform to Analyze the Dependence of Radiomic Features on Different CT Imaging Parameters(2023) Wang, ShenglePurpose: To develop a framework to quantify the impact of different CT imaging parameters on the variation of radiomic features and to investigate the effectiveness of different image processing methods in reducing that variation.
Method: A publicly available CT image dataset (Credence Cartridge Radiomics Phantom CT scans), acquired on a phantom consisting of different texture materials, was used in this study. 251 scans were divided into 5 groups; in each group, only one of the following imaging parameters changed: slice thickness, pixel size, convolution kernel, exposure (mAs), or scanner model. 92 radiomic features from the intensity, intensity histogram, GLCM, GLRLM, and GLSZM groups were extracted from the same region of interest (ROI) in each scan using an in-house application. The coefficient of variation (CV) was used to measure the variation of radiomic features due to each imaging parameter. Three preprocessing methods, resampling, gray-level rebinning, and image filtering, were tested for their effectiveness in reducing feature variations.
Results: The proposed workflow identified individual features and groups of features with high variation and showed the response of each feature to different image preprocessing methods. The convolution kernel caused the largest variations in calculated features in general, while exposure had little influence on features in all categories. GLSZM features showed higher sensitivity to pixel size and slice thickness, because the number of voxels in a gray-level zone depends on the voxel size. Image preprocessing did not improve feature robustness in most cases.
Conclusion: This study demonstrated the ability to reveal the relationships between radiomic feature variations, imaging parameters, and image correction methods. The proposed workflow can be used to study the feature robustness prior to the application of any radiomic features in multi-institutional studies.
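The coefficient of variation used in this workflow is the standard deviation over the absolute mean, often reported in percent; a minimal sketch (population variance assumed, which is a choice of this sketch rather than a detail stated in the abstract):

```python
import math

def coefficient_of_variation(values):
    """Coefficient of variation in percent: 100 * std / |mean|, computed
    over a feature's values across scans that differ in one imaging
    parameter. Uses the population (divide-by-n) variance."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return 100.0 * math.sqrt(var) / abs(mean)
```

Because CV is normalized by the mean, it lets features with very different magnitudes (e.g. GLCM energy versus mean intensity) be ranked on a single robustness scale.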