Browsing by Subject "deep learning"
Item Open Access
Contour interpolation by deep learning approach. (Journal of medical imaging (Bellingham, Wash.), 2022-11)
Zhao, Chenxi; Duan, Ye; Yang, Deshan

Purpose
Contour interpolation is an important tool for expediting manual segmentation of anatomical structures. The process allows users to contour manually on discontinuous slices and then automatically fill in the gaps, thereby saving time and effort. The most commonly used conventional algorithm, shape-based interpolation (SBI), operates on shape information and often performs suboptimally near the superior and inferior borders of organs and for gastrointestinal structures. In this study, we present a generic deep learning solution to improve the robustness and accuracy of contour interpolation, especially for these historically difficult cases.
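For reference, the conventional SBI baseline can be sketched in a few lines: compute signed distance maps of the two contoured slices, linearly blend them, and threshold back to masks. This is a minimal illustration of the classical algorithm under our own naming, not the exact implementation compared in the paper.

```python
# Minimal sketch of conventional shape-based interpolation (SBI) between two
# binary contour masks; function names are illustrative.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask: np.ndarray) -> np.ndarray:
    """Positive distances inside the contour, negative outside."""
    mask = mask.astype(bool)
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

def sbi_interpolate(top: np.ndarray, bottom: np.ndarray, n_slices: int):
    """Interpolate n_slices binary masks between two contoured slices."""
    d_top, d_bottom = signed_distance(top), signed_distance(bottom)
    slices = []
    for i in range(1, n_slices + 1):
        w = i / (n_slices + 1)                 # fractional position between slices
        d = (1.0 - w) * d_top + w * d_bottom   # linear blend of distance maps
        slices.append(d > 0)                   # threshold back to a binary mask
    return slices
```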
Approach
A generic deep contour interpolation model was developed and trained using 16,796 publicly available cases from five different data libraries, covering 15 organs. The network inputs were a 128 × 128 × 5 image patch and the two-dimensional contour masks for the top and bottom slices of the patch; the outputs were the organ masks for the three middle slices. Performance was evaluated using both Dice scores and distance-to-agreement (DTA) values.
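A shape-level sketch of the interpolator's interface as described (a 128 × 128 × 5 patch plus two contour masks in, three middle-slice masks out) might look as follows in PyTorch. The tiny convolutional stack is a placeholder, and stacking the inputs as channels is our assumption, not the published architecture.

```python
# Interface-only sketch of the deep contour interpolator (placeholder layers).
import torch
import torch.nn as nn

class ContourInterpolator(nn.Module):
    def __init__(self):
        super().__init__()
        # 5 image slices + 2 contour masks stacked as 7 channels (an assumption).
        self.net = nn.Sequential(
            nn.Conv2d(7, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 1),               # one logit map per middle slice
        )

    def forward(self, patch, mask_top, mask_bottom):
        x = torch.cat([patch, mask_top, mask_bottom], dim=1)  # (B, 7, 128, 128)
        return torch.sigmoid(self.net(x))                     # (B, 3, 128, 128)

model = ContourInterpolator()
out = model(torch.rand(1, 5, 128, 128),
            torch.rand(1, 1, 128, 128),
            torch.rand(1, 1, 128, 128))
print(out.shape)  # torch.Size([1, 3, 128, 128])
```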
Results
The deep contour interpolation model achieved a Dice score of 0.95 ± 0.05 and a mean DTA value of 1.09 ± 2.30 mm, averaged over 3,167 testing cases across all 15 organs. In comparison, the results for the conventional SBI method were 0.94 ± 0.08 and 1.50 ± 3.63 mm, respectively. For the difficult cases, the Dice score and DTA value were 0.91 ± 0.09 and 1.68 ± 2.28 mm for the deep interpolator, compared with 0.86 ± 0.13 and 3.43 ± 5.89 mm for SBI. t-test results confirmed that the performance improvements were statistically significant (p < 0.05) for all cases in Dice scores, and for small organs and difficult cases in DTA values. Ablation studies were also performed.
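Both reported metrics are standard and easy to reproduce. Below is a minimal sketch of a Dice score and a simple surface-based DTA, assuming DTA is measured from predicted mask surfaces to the reference surface (the paper's exact definition may differ).

```python
# Dice overlap and a simple surface-based distance-to-agreement (DTA).
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels: mask minus its erosion."""
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)

def mean_dta(pred: np.ndarray, ref: np.ndarray, spacing: float = 1.0) -> float:
    """Mean distance (in mm, given isotropic spacing) from predicted surface
    voxels to the nearest reference surface voxel."""
    dist_to_ref = distance_transform_edt(~surface(ref)) * spacing
    return float(dist_to_ref[surface(pred)].mean())
```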
Conclusions
A deep learning method was developed to enhance the process of contour interpolation. It could be useful for expediting manual segmentation of organs and structures in medical images.

Item Embargo
Deep Learning-based Onboard Image Guidance and Dose Verification for Radiation Therapy (2024)
Jiang, Zhuoran

Onboard image guidance and dose verification play important roles in radiation therapy, enabling precise targeting and accurate dose delivery. However, the clinical utility of these advanced techniques is limited by image quality degraded by under-sampling. Specifically, four-dimensional cone-beam computed tomography (4D-CBCT) is a valuable tool for providing onboard respiration-resolved images of moving targets, but its image quality is degraded by intra-phase sparse sampling imposed by clinical constraints on acquisition time and imaging dose. Radiation-induced acoustic (RA) imaging and prompt gamma (PG) imaging are two promising methods for reconstructing 3D dose deposition noninvasively and in real time during treatment, but their images are severely distorted by single-view measurements. Essentially, reconstructing images from under-sampled acquisitions is an ill-conditioned inverse problem. Our previous studies have demonstrated the effectiveness of deep learning in restoring volumetric information from sparse and limited-angle measurements. In this project, we further explore the applications of deep learning (1) in providing high-quality and efficient onboard image guidance before dose delivery for target localization, and (2) in realizing precise quantitative 3D dosimetry during delivery for dose verification in radiotherapy.

The first aim is achieved by reconstructing high-quality 4D-CBCT from fast (1-minute) free-breathing scans. We proposed a feature-compensated deformable convolutional network (FeaCo-DCN) to perform inter-phase compensation in the latent feature space, which had not been explored by previous studies. In FeaCo-DCN, encoding networks extract features from each phase; features of the other phases are then deformed to those of the target phase via deformable convolutional networks. Finally, a decoding network combines and decodes features from all phases to yield high-quality images of the target phase. The proposed FeaCo-DCN was evaluated using lung cancer patient data. Key findings include: (1) FeaCo-DCN generated high-quality images with accurate and clear structures for a fast 4D-CBCT scan; (2) 4D-CBCT images reconstructed by FeaCo-DCN achieved 3D tumor localization accuracy within 2.5 mm; (3) image reconstruction is nearly real-time; and (4) FeaCo-DCN achieved superior performance by all metrics compared with the top-ranked techniques in the AAPM SPARE Challenge. In conclusion, the proposed FeaCo-DCN is effective and efficient in reconstructing 4D-CBCT while reducing scanning time by about 90%, which can be highly valuable for moving-target localization in image-guided radiotherapy.
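The core FeaCo-DCN idea, deforming one phase's latent features toward the target phase, can be sketched with torchvision's deformable convolution, with offsets predicted from the concatenated feature pair. Layer sizes and the offset-prediction design are illustrative assumptions, not the published configuration.

```python
# Sketch of inter-phase feature compensation via deformable convolution.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class PhaseFeatureAlign(nn.Module):
    def __init__(self, channels: int = 64, k: int = 3):
        super().__init__()
        # 2 offsets (x, y) per kernel sample -> 2 * k * k offset channels,
        # predicted from the concatenated (source, target) feature pair.
        self.offset_pred = nn.Conv2d(2 * channels, 2 * k * k, 3, padding=1)
        self.deform = DeformConv2d(channels, channels, k, padding=k // 2)

    def forward(self, feat_src, feat_tgt):
        offsets = self.offset_pred(torch.cat([feat_src, feat_tgt], dim=1))
        # Source-phase features sampled at offset locations, i.e. warped
        # toward the target phase.
        return self.deform(feat_src, offsets)

align = PhaseFeatureAlign()
aligned = align(torch.rand(1, 64, 32, 32), torch.rand(1, 64, 32, 32))
print(aligned.shape)  # torch.Size([1, 64, 32, 32])
```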
The second aim is achieved by reconstructing accurate dose maps from multi-modality radiation-induced signals such as (a) acoustic waves and (b) prompt gammas. For protoacoustic (PA) imaging, we developed a deep learning-based method to address the limited-view issue in PA reconstruction. A deep cascaded convolutional neural network (DC-CNN) was proposed to reconstruct high-quality 3D radiation-induced pressures from PA signals detected by a matrix array, and then derive precise 3D dosimetry from those pressures for dose verification in proton therapy. To validate its performance, we collected 81 prostate cancer patients' proton therapy treatment plans. Proton-acoustic simulation was performed using the open-source k-Wave package. A matrix ultrasound array was simulated near the perineum to acquire radiofrequency (RF) signals during dose delivery. For realistic acoustic simulations, tissue heterogeneity and attenuation were considered, and Gaussian white noise was added to the acquired RF signals. The proposed DC-CNN was trained on 204 samples from 69 patients and tested on 26 samples from 12 other patients. Results demonstrated that the proposed method considerably improved the limited-view proton-acoustic image quality, reconstructing pressures with clear and accurate structures and deriving doses in high agreement with the ground truth. Quantitatively, the pressure accuracy achieved an RMSE of 0.061, and the dose accuracy achieved an RMSE of 0.044, a gamma index (3%/3 mm) of 93.71%, and a 90%-isodose-line Dice of 0.922. The proposed method demonstrates the feasibility of achieving high-quality quantitative 3D dosimetry in proton-acoustic imaging using a matrix array, which potentially enables online 3D dose verification for prostate proton therapy.

Beyond the limited-angle acquisition challenge in acoustic imaging, we also developed a general deep inception convolutional neural network (GDI-CNN) to address the low-SNR challenge of few-frame-averaged acoustic signals. The network employs convolutions with multiple dilations in each inception block, allowing it to encode and decode signal features with varying temporal characteristics; this design generalizes GDI-CNN to denoising acoustic signals from different radiation sources. The performance of the proposed method was evaluated both qualitatively and quantitatively using experimental X-ray-induced acoustic and protoacoustic data. Results demonstrated the effectiveness of GDI-CNN: it achieved X-ray-induced acoustic image quality comparable to 750-frame-averaged results using only 10-frame-averaged measurements, reducing the imaging dose of X-ray-acoustic computed tomography (XACT) by 98.7%; and it realized proton range accuracy comparable to 1500-frame-averaged results using only 20-frame-averaged measurements, improving the range-verification frequency in proton therapy from 0.5 Hz to 37.5 Hz. Compared with low-pass-filter-based denoising, the proposed method demonstrated considerably lower mean squared error, higher peak SNR, and higher structural similarity with respect to the corresponding high-frame-averaged measurements. The proposed deep learning-based denoising framework is a generalized method for few-frame-averaged acoustic signal denoising, which significantly improves the clinical utility of RA imaging for low-dose imaging and real-time therapy monitoring.
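The multi-dilation inception block at the heart of GDI-CNN can be sketched as parallel 1D convolutions with different dilation rates whose outputs are merged, so each block sees several temporal scales at once. Channel counts and dilation rates here are assumptions.

```python
# Sketch of a multi-dilation inception block for 1D acoustic signal denoising.
import torch
import torch.nn as nn

class DilatedInceptionBlock1d(nn.Module):
    def __init__(self, in_ch: int, branch_ch: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        # padding = dilation keeps each branch's output length equal to the
        # input length for kernel size 3.
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, branch_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        self.merge = nn.Conv1d(branch_ch * len(dilations), in_ch, kernel_size=1)

    def forward(self, x):  # x: (batch, channels, time)
        y = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.merge(torch.relu(y))

block = DilatedInceptionBlock1d(in_ch=16, branch_ch=8)
out = block(torch.rand(2, 16, 1024))  # shape preserved: (2, 16, 1024)
```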
For prompt gamma imaging, we proposed a two-tier deep learning-based method with a novel weighted axis-projection loss to generate precise 3D PG images for accurate proton range verification. The proposed method consists of two models: first, a localization model is trained to define a region of interest (ROI) containing the proton pencil beam in the distorted back-projected PG image; second, an enhancement model is trained to restore the true PG emissions with additional attention on the ROI. In this study, we simulated 54 proton pencil beams delivered at clinical dose rates in a tissue-equivalent phantom using Monte Carlo (MC) methods. PG detection with a Compton camera (CC) was simulated using the MC-Plus-Detector-Effects model. Images were reconstructed using the kernel-weighted back-projection algorithm and then enhanced by the proposed method. The method effectively restored the 3D shape of the PG images, with the proton pencil-beam range clearly visible in all testing cases. Range errors were within 2 pixels (4 mm) in all directions in most cases at a higher dose level. The proposed method is fully automatic, and the enhancement takes only approximately 0.26 seconds. This preliminary study demonstrated the feasibility of generating accurate 3D PG images with a deep learning framework, providing a powerful tool for high-precision in vivo range verification in proton therapy.

Together, these applications can significantly reduce uncertainties in patient positioning and dose delivery in radiotherapy, improving treatment precision and outcomes.
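A weighted axis-projection loss of the kind named above might augment a voxel-wise term with losses on the volume's projections along each axis, emphasizing the beam-shaped PG signal. The weights and exact form below are our guess at the idea, not the published loss.

```python
# Hypothetical sketch of a weighted axis-projection loss for 3D PG volumes.
import torch
import torch.nn.functional as F

def weighted_axis_projection_loss(pred, target, voxel_w=1.0, proj_w=(1.0, 1.0, 1.0)):
    """pred, target: (batch, 1, D, H, W) PG emission volumes."""
    loss = voxel_w * F.mse_loss(pred, target)          # voxel-wise term
    for axis, w in zip((2, 3, 4), proj_w):             # project along D, H, W
        loss = loss + w * F.mse_loss(pred.sum(dim=axis), target.sum(dim=axis))
    return loss
```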
Item Open Access
Fluence Map Prediction Using Deep Learning Models - Direct Plan Generation for Pancreas Stereotactic Body Radiation Therapy. (Frontiers in artificial intelligence, 2020-01)
Wang, Wentao; Sheng, Yang; Wang, Chunhao; Zhang, Jiahan; Li, Xinyi; Palta, Manisha; Czito, Brian; Willett, Christopher G; Wu, Qiuwen; Ge, Yaorong; Yin, Fang-Fang; Wu, Q Jackie

Purpose: Treatment planning for pancreas stereotactic body radiation therapy (SBRT) is a difficult and time-consuming task. In this study, we aim to develop a novel deep learning framework that generates clinical-quality plans by directly predicting fluence maps from patient anatomy using convolutional neural networks (CNNs).

Materials and Methods: Our proposed framework utilizes two CNNs to predict intensity-modulated radiation therapy fluence maps and generate deliverable plans: (1) a field-dose CNN predicts field-dose distributions in the region of interest from planning images and structure contours; (2) a fluence-map CNN predicts the final fluence map for each beam from the predicted field dose projected onto the beam's eye view. The predicted fluence maps were subsequently imported into the treatment planning system for leaf sequencing and final dose calculation (model-predicted plans). One hundred patients previously treated with pancreas SBRT were included in this retrospective study and split into 85 training cases and 15 test cases. For each network, 10% of the training data were randomly selected for model validation. Nine-beam benchmark plans with standardized target prescription and organ-at-risk constraints were planned by experienced clinical physicists and used as the gold standard to train the model. Model-predicted plans were compared with benchmark plans in terms of dosimetric endpoints, fluence-map deliverability, and total monitor units.

Results: The average time for fluence-map prediction per patient was 7.1 s. Comparing model-predicted plans with benchmark plans, the absolute differences, as percentages of prescription, were 0.1%, 3.9%, and 2.1% for target mean dose, maximum dose (0.1 cc), and D95%, respectively; organ-at-risk mean-dose and maximum-dose (0.1 cc) absolute differences were 0.2% and 4.4%, respectively. The predicted plans had fluence-map gamma indices (97.69 ± 0.96% vs. 98.14 ± 0.74%) and total monitor units (2,122 ± 281 vs. 2,265 ± 373) comparable to the benchmark plans.

Conclusions: We developed a novel deep learning framework for pancreas SBRT planning that predicts a fluence map for each beam and can therefore bypass the lengthy inverse optimization process. The proposed framework could potentially change the paradigm of treatment planning by harnessing the power of deep learning to generate clinically deliverable plans in seconds.
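At a high level, the two-CNN inference described in the Materials and Methods can be sketched as below. Every callable here is a placeholder standing in for the paper's trained models and projection step, not a real API.

```python
# High-level sketch of the two-step fluence-map prediction pipeline.
# All callables (field_dose_cnn, fluence_cnn, project_to_bev) are hypothetical
# stand-ins for the paper's trained networks and geometry code.
def predict_fluence_maps(ct_volume, structure_masks, beams,
                         field_dose_cnn, fluence_cnn, project_to_bev):
    fluence_maps = []
    for beam in beams:                                   # e.g., 9 beam angles
        # Step 1: predict the field-dose distribution in the region of interest.
        field_dose = field_dose_cnn(ct_volume, structure_masks, beam)
        # Step 2: project the predicted dose onto the beam's eye view and
        # predict the 2D fluence map for this beam.
        bev_dose = project_to_bev(field_dose, beam)
        fluence_maps.append(fluence_cnn(bev_dose))
    # The resulting maps go to the TPS for leaf sequencing and dose calculation.
    return fluence_maps
```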
Item Open Access
Material decomposition from photon-counting CT using a convolutional neural network and energy-integrating CT training labels. (Physics in medicine and biology, 2022-06-29)
Nadkarni, Rohan; Allphin, Alex; Clark, Darin P; Badea, Cristian T

Objective
Photon-counting CT (PCCT) has better dose efficiency and spectral resolution than energy-integrating CT, which is advantageous for material decomposition. Unfortunately, the accuracy of PCCT-based material decomposition is limited by spectral distortions in the photon-counting detector (PCD).
Approach
In this work, we demonstrate a deep learning (DL) approach that compensates for spectral distortions in the PCD and improves the accuracy of material decomposition by using decomposition maps derived from high-dose, multi-energy, energy-integrating detector (EID) data as training labels. We use a 3D U-net architecture and compare networks taking as input PCD filtered back-projection reconstructions (FBP2Decomp), PCD iterative reconstructions (Iter2Decomp), or PCD decompositions (Decomp2Decomp).
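A shape-level sketch of the networks being compared: a small 3D convolutional stack (standing in here for the paper's 3D U-net) maps a multi-channel PCD input volume to iodine, Compton-scattering, and photoelectric maps, trained against multi-EID decomposition labels. The number of input energy bins is an assumption.

```python
# Interface sketch of image-domain material decomposition (placeholder layers).
import torch
import torch.nn as nn

n_energy_bins, n_materials = 4, 3        # assumed PCD bins; I, CS, PE outputs
decomp_net = nn.Sequential(              # placeholder for the paper's 3D U-net
    nn.Conv3d(n_energy_bins, 32, 3, padding=1), nn.ReLU(),
    nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv3d(32, n_materials, 1),
)

pcd_volume = torch.rand(1, n_energy_bins, 16, 64, 64)   # e.g., Iter2Decomp input
material_maps = decomp_net(pcd_volume)                  # (1, 3, 16, 64, 64)
print(material_maps.shape)
```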
Main results
We found that our Iter2Decomp approach performs best, but DL outperforms matrix-inversion decomposition regardless of the input. Compared with PCD matrix-inversion decomposition, Iter2Decomp gives 27.50% lower root mean squared error (RMSE) in the iodine (I) map and 59.87% lower RMSE in the photoelectric effect (PE) map. In addition, it increases the structural similarity (SSIM) by 1.92%, 6.05%, and 9.33% in the I, Compton scattering (CS), and PE maps, respectively. For measurements from iodine and calcium vials, Iter2Decomp shows excellent agreement with multi-EID decomposition. One limitation is some blurring introduced by the DL approach: resolution at 50% modulation transfer function (MTF) decreases from 1.98 line pairs/mm with PCD matrix-inversion decomposition to 1.75 line pairs/mm with Iter2Decomp.
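The two accuracy metrics reported above are straightforward to compute for a predicted material map against its multi-EID reference, for example with NumPy and scikit-image:

```python
# RMSE and SSIM between a predicted material map and its reference.
import numpy as np
from skimage.metrics import structural_similarity

def rmse(pred: np.ndarray, ref: np.ndarray) -> float:
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def ssim(pred: np.ndarray, ref: np.ndarray) -> float:
    # data_range is required for floating-point images.
    return structural_similarity(pred, ref, data_range=float(ref.max() - ref.min()))
```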
Significance
Overall, this work demonstrates that our DL approach with decomposition labels derived from high-dose multi-EID data is effective at generating more accurate material maps from PCD data. More accurate preclinical spectral PCCT imaging such as this could serve the development of nanoparticles that show promise in the field of theranostics (therapy and diagnostics).

Item Open Access
Using computer vision on herbarium specimen images to discriminate among closely related horsetails (Equisetum). (Applications in plant sciences, 2020-06)
Pryer, KM; Tomasi, C; Wang, X; Meineke, EK; Windham, MD

Premise: Equisetum is a distinctive vascular plant genus with 15 extant species worldwide. Species identification is complicated by morphological plasticity and frequent hybridization events, leading to a disproportionately high number of misidentified specimens. These may be correctly identified by applying appropriate computer vision tools.

Methods: We hypothesize that aerial stem nodes provide enough information to distinguish among Equisetum hyemale, E. laevigatum, and E. ×ferrissii, the last being a hybrid between the other two. An object detector was trained to find nodes in a given image and to distinguish E. hyemale nodes from those of E. laevigatum. A classifier then took statistics from the detection results and classified the given image into one of the three taxa. Both detector and classifier were trained and tested on images manually annotated by experts.

Results: On our exploratory test set of 30 images, the detector/classifier combination identified all 10 E. laevigatum images correctly, as well as nine of 10 E. hyemale images and eight of 10 E. ×ferrissii images, for a 90% classification accuracy.

Discussion: Our results support the notion that computer vision may help with the identification of herbarium specimens once enough manual annotations become available.
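The two-stage pipeline in the Methods reduces to: the detector emits per-node labels, and a classifier maps detection statistics to a taxon. The sketch below compresses the trained classifier into fixed thresholds on the fraction of E. hyemale-type nodes; the thresholds and the "mostly mixed means hybrid" rule are purely hypothetical stand-ins for the paper's learned classifier.

```python
# Illustrative reduction of the detector -> statistics -> taxon pipeline.
def classify_specimen(node_labels, hi=0.8, lo=0.2):
    """node_labels: per-node detector output, e.g. ['hyemale', 'laevigatum', ...].
    Thresholds hi/lo are hypothetical; the paper trains this classifier."""
    if not node_labels:
        return "unknown"
    frac_hyemale = node_labels.count("hyemale") / len(node_labels)
    if frac_hyemale >= hi:
        return "Equisetum hyemale"
    if frac_hyemale <= lo:
        return "Equisetum laevigatum"
    return "Equisetum ×ferrissii"   # mixed node signal -> putative hybrid

print(classify_specimen(["hyemale", "laevigatum", "hyemale", "laevigatum"]))
# -> Equisetum ×ferrissii
```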