Browsing by Subject "Image segmentation"
Item Open Access
ComPRePS: An Automated Cloud-based Image Analysis Tool to Democratize AI in Digital Pathology (bioRxiv, 2024-04-05)
Mimar, Sayat; Paul, Anindya S; Lucarelli, Nicholas; Border, Samuel; Santo, Briana A; Naglah, Ahmed; Barisoni, Laura; Hodgin, Jeffrey; Rosenberg, Avi Z; Clapp, William; Sarder, Pinaki; Kidney Precision Medicine Project
Artificial intelligence (AI) has extensive applications across a wide range of disciplines, including healthcare and clinical practice. Advances in high-resolution whole-slide brightfield microscopy allow for the digitization of histologically stained tissue sections, producing gigapixel-scale whole-slide images (WSI). The significant improvement in computing power and the revolution in deep neural network (DNN)-based AI technologies over the last decade allow us to integrate massively parallelized computational power, cutting-edge AI algorithms, and big-data storage, management, and processing. Applied to WSIs, AI has created opportunities for improved disease diagnostics and prognostics, with the ultimate goal of enhancing precision medicine and the resulting patient care. The National Institutes of Health (NIH) has recognized the importance of developing standardized principles of data management and discovery for the advancement of science and proposed the Findable, Accessible, Interoperable, Reusable (FAIR) Data Principles [1] with the goal of building a modernized biomedical data-resource ecosystem to establish collaborative research communities. In line with this mission, and to democratize AI-based image analysis in digital pathology, we propose ComPRePS: an end-to-end automated Computational Renal Pathology Suite which combines massive scalability, on-demand cloud computing, and an easy-to-use web-based user interface for data upload, storage, management, slide-level visualization, and domain-expert interaction.
Moreover, our platform is equipped with sophisticated in-house and collaborator-developed AI algorithms in the back-end server for image analysis, to identify clinically relevant micro-anatomic functional tissue units (FTU) and to extract image features.
Item Open Access
Consensus Segmentation for Positron Emission Tomography: Development and Applications in Radiation Therapy (2013)
McGurk, Ross
The use of positron emission tomography (PET) in radiation therapy has continued to grow, especially since the development of combined computed tomography (CT) and PET imaging systems in the early 1990s. Today, the biggest use of PET-CT is in oncology, where a glucose-analog radiotracer is rapidly incorporated into the metabolic pathways of a variety of cancers. Images representing the in-vivo distribution of this radiotracer are used for staging, delineation, and assessment of treatment response in patients undergoing chemotherapy or radiation therapy. While PET offers the ability to provide functional information, its image quality is adversely affected by its lower spatial resolution, and it has unfavorable image-noise characteristics due to radiation-dose concerns and patient compliance. These factors result in PET images having less detail and a lower signal-to-noise ratio (SNR) compared to images produced by CT. This complicates the use of PET within many areas of radiation oncology, particularly the delineation of targets for radiation therapy and the assessment of patient response to therapy. The development of segmentation methods that can provide accurate object identification in PET images under a variety of imaging conditions has been a goal of the imaging community for years.
The goals of this thesis are to: (1) investigate the effect of filtering on segmentation methods; (2) investigate whether combining individual segmentation methods can improve segmentation accuracy; (3) investigate whether consensus volumes can be useful in aiding physicians of different experience levels in defining gross tumor volumes (GTV) for head-and-neck cancer patients; and (4) investigate whether consensus volumes can be useful in assessing early treatment response in head-and-neck cancer patients.
For this dissertation work, standard spherical objects with volumes ranging from 1.15 cc to 37 cc, and two irregularly shaped objects of volume 16 cc and 32 cc formed by deforming high-density plastic bottles, were placed in a standardized image-quality phantom and imaged at two contrasts (4:1 and 8:1 for spheres; 4.5:1 and 9:1 for irregular objects) and three scan durations (1, 2, and 5 minutes). For the comparison of image filters, Gaussian and bilateral filters matched to produce a similar signal-to-noise ratio (SNR) in background regions were applied to raw unfiltered images. Objects were segmented using thresholding at 40% of the maximum intensity within a region-of-interest (ROI), an adaptive thresholding method that accounts for the signal of the object as well as the background, k-means clustering, and a seeded region-growing method adapted from the literature. Quality of the segmentations was assessed using the Dice Similarity Coefficient (DSC) and the symmetric mean absolute surface distance (SMASD). Further, models describing how DSC varies with object size, contrast, scan duration, filter choice, and segmentation method were fitted using generalized estimating equations (GEEs) and standard regression for comparison. GEEs account for the bounded, correlated, and heteroscedastic nature of the DSC metric. Our analysis revealed that object size had the largest effect on DSC for spheres, followed by contrast and scan duration. In addition, compared to filtering images with a 5 mm full-width-at-half-maximum (FWHM) Gaussian filter, a 7 mm bilateral filter with moderate pre-smoothing (3 mm Gaussian; G3B7) produced significant improvements in 3 out of the 4 segmentation methods for spheres. For the irregular objects, scan duration had the biggest effect on DSC values, followed by contrast.
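Two of the building blocks described in this abstract, thresholding at 40% of the maximum intensity within an ROI and the Dice Similarity Coefficient, can be sketched in a few lines of NumPy. This is an illustrative sketch assuming boolean voxel masks; the function names are hypothetical and not taken from the dissertation:

```python
import numpy as np

def threshold_40_segment(image, roi_mask):
    """Segment by thresholding at 40% of the maximum intensity
    found inside a region-of-interest (ROI)."""
    threshold = 0.40 * image[roi_mask].max()
    return (image >= threshold) & roi_mask

def dice_similarity(seg, truth):
    """Dice Similarity Coefficient: 2|A∩B| / (|A| + |B|), in [0, 1]."""
    intersection = np.logical_and(seg, truth).sum()
    denom = seg.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0
```

A DSC of 1.0 indicates perfect overlap with the ground-truth object; 0.0 indicates no overlap.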
For the study applying consensus methods to PET segmentation, an additional gradient-based method was added to the collection of individual segmentation methods used in the filtering study. Objects in images acquired with 5-minute scan durations were filtered with a 5 mm FWHM Gaussian before being segmented by all individual methods. Two approaches to creating a volume reflecting the agreement between the individual methods were investigated: first, a simple majority voting scheme (MJV), in which voxels segmented by three or more of the individual methods are included in the consensus volume; and second, the Simultaneous Truth and Performance Level Estimation (STAPLE) method, a maximum-likelihood methodology previously presented in the literature but never applied to PET segmentation. Improvements in accuracy matching or exceeding the best-performing individual method were observed, and importantly, both consensus methods provided robustness against poorly performing individual methods. In fact, the distributions of DSC and SMASD values for MJV and STAPLE closely match the distribution that would result if the best individual method were selected for every object (the best individual method varies by object). Given that the best individual method depends on object type, size, contrast, and image noise, and cannot be known before segmentation, consensus methods offer a marked improvement over the current standard of using just one of the individual segmentation methods used in this dissertation.
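The majority voting scheme described above amounts to counting, per voxel, how many individual segmentations include it. A minimal sketch, assuming each individual method's output is a boolean array of identical shape (the function name is illustrative, not from the dissertation):

```python
import numpy as np

def majority_vote(segmentations, min_votes=3):
    """MJV consensus: a voxel enters the consensus volume when at
    least `min_votes` of the individual segmentations include it."""
    votes = np.sum(np.stack([np.asarray(s, dtype=np.uint8)
                             for s in segmentations]), axis=0)
    return votes >= min_votes
```

With five individual methods and `min_votes=3`, this reproduces the "three or more methods agree" rule from the abstract.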
To explore the potential application of consensus volumes to radiation therapy, the MJV consensus method was used to produce GTVs in a population of head-and-neck cancer patients. This GTV, and one created using simple 40% thresholding, were then available as guidance volumes for an attending head-and-neck radiation oncologist and a resident who had completed their head-and-neck rotation. The task for each physician was to manually delineate GTVs using the CT and PET images. Each patient was contoured three times by each physician: without guidance, and with guidance from either the MJV consensus volume or the 40% thresholding volume. Differences in GTV volumes between physicians were not significant, nor were differences between the GTV volumes regardless of which guidance volume was available to the physicians. However, on average, 15-20% of the provided guidance volume lay outside the final physician-defined contour.
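The 15-20% figure quoted above is the fraction of the guidance volume falling outside the final physician-defined contour, which is a simple mask operation. An illustrative sketch under the assumption of boolean voxel masks (the function name is hypothetical):

```python
import numpy as np

def fraction_outside(guidance, final_contour):
    """Fraction of the guidance volume lying outside the
    physician-defined contour. Both inputs are boolean masks
    of the same shape."""
    guidance_voxels = guidance.sum()
    outside = np.logical_and(guidance, ~final_contour).sum()
    return outside / guidance_voxels if guidance_voxels else 0.0
```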
In the final study, the MJV and STAPLE consensus volumes were used to extract maximum, peak, and mean SUV measurements from two baseline PET scans and one PET scan acquired during the patients' prescribed radiation therapy treatments. Mean SUV values derived from consensus volumes showed smaller variability than maximum SUV values. Baseline and intratreatment variability was assessed using a Bland-Altman analysis, which showed that baseline variability in SUV was lower than intratreatment changes in SUV.
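A Bland-Altman analysis of the kind used here summarizes paired measurements (e.g. SUV from the two baseline scans) by their mean difference (bias) and 95% limits of agreement. A minimal sketch of those two statistics, with a hypothetical function name:

```python
import numpy as np

def bland_altman(x, y):
    """Bland-Altman statistics for paired measurements:
    returns (bias, (lower_loa, upper_loa)) with 95% limits
    of agreement at bias ± 1.96 * SD of the differences."""
    diff = np.asarray(x, float) - np.asarray(y, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Narrow baseline limits of agreement relative to the intratreatment differences are what support the abstract's conclusion that the observed SUV changes exceed test-retest variability.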
The techniques developed and reported in this thesis demonstrate how filter choice affects segmentation accuracy, how GEEs more appropriately account for the properties of a common segmentation-quality metric, and how consensus volumes not only provide accuracy on par with the single best-performing individual method for a given activity distribution, but also exhibit robustness against the variable performance of the individual segmentation methods that make up the consensus volume. These properties make consensus volumes appealing for a variety of tasks in radiation oncology.
Item Open Access
Deriving Lung Ventilation Map Directly from Auto-Segmented CT Images Using Deep Convolutional Neural Network (CNN) (2022)
Li, Nan
Lung cancer has been the most commonly occurring cancer (J. Ferlay, 2018), with the highest fatality rate worldwide. Lung cancer patients undergoing radiation therapy typically experience many side effects. In order to reduce these adverse effects, lung function (ventilation state)-guided radiation therapy has been highly recommended. "Functional Lung Avoidance Radiation Therapy" (FLA-RT) can selectively avoid high-dose irradiation of well-functioning regions of the lungs and reduce lung injury during RT (Azza Ahmed Khalil, 2021). FLA-RT, however, needs information on lung function for the treatment process. The conventional techniques for acquiring a lung function map (S. W. Harders, 2013) include the 99mTc SPECT technique (Suga, 2002), the 99mTc MRI technique (Lindsay Mathew, 2012), and the 68Ga PET technique (Jason Callahan, 2013). Nevertheless, these techniques have the following issues: high cost, a labor-intensive preparation process, and low accessibility for radiation oncology departments. This research aims to investigate whether lung function images can be generated from routine planning CT images using a CNN. This study also develops an image segmentation method to automatically and accurately segment the lung volume in chest CT images. This study retrospectively analyzed 99mTc DTPA SPECT scans of 21 cases. These were open-source data from "VAMPIRE (Ventilation Archive for Medical Pulmonary Image Registration Evaluation)", established by John Kipritidis, Henry C. Woodruff, and Paul J. Keall from the Radiation Physics Laboratory of the University of Sydney in Australia. The CT images and the reference mask images were 512 × 512 matrices with a pixel size of 2.5 × 2.5 mm² and a 3 mm slice thickness. The SPECT images were reconstructed in 512 × 512 matrices with a 2.5 × 2.5 × 2.5 mm³ voxel size.
CT, reference mask, and SPECT images are all in ".mha" data format for each study case. This study contains two major components. First, a deep-learning model was developed to auto-segment the lung region from the CT images. Second, another deep-learning model was developed to use the segmented lung CT image as input to predict the lung ventilation function map. To accomplish the first task, we used the CT images as the network input and the reference mask images as the network output, and trained a designated 2D U-shape backbone network to produce the first model. To test model performance, Pixel Accuracy, Pixel Recall, Pixel Precision, and Intersection over Union (IoU) were used as assessment criteria to evaluate the quality of the model-generated lung masks against the ground-truth masks. To accomplish the second task, we used the segmented lung CT images as the network input and the SPECT images as the network output, and trained another designated 3D U-shape backbone network to produce the second model. To test the performance of the second model, Spearman's correlation coefficient (Piantadosi) was used as the assessment criterion to evaluate the correlation between the model-generated lung function images and the ground-truth SPECT images. To achieve the optimal outcome, this study ran parallel studies comparing the influence of different training strategies on the outcome (see Chapter 3.3.2). The different training strategies include two aspects for DL Model 1 and four aspects for DL Model 2. Training the designed network with three-channel data as input provided the best results for image segmentation. For test case 1, Pixel Accuracy is 0.935±0.033, Pixel Recall is 0.942±0.029, Pixel Precision is 0.942±0.032, and IoU is 0.891±0.042.
For test case 2, Pixel Accuracy is 0.950±0.024, Pixel Recall is 0.961±0.015, Pixel Precision is 0.943±0.028, and IoU is 0.909±0.036. For deriving lung function images, training the designed network using the ground-truth mask to segment the chest CT, with [-1, 1] normalization and a 32 × 32 × 64 training patch size as inputs, provided the best results. The Spearman's correlation coefficients for cases 1 and 2 were 0.8689±0.038 and 0.8716±0.036, respectively. This preliminary study using the designed U-shape backbone convolutional neural networks (CNNs) achieved satisfactory auto-segmentation results and derived promising lung function maps. It indicates the feasibility of directly deriving the lung ventilation state (SPECT-like images) from CT images. The CNN-derived "SPECT-like" lung functional images could serve as a reference for FLA-RT.
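The four segmentation metrics reported above (Pixel Accuracy, Pixel Recall, Pixel Precision, IoU) all derive from the same confusion counts between a predicted mask and a ground-truth mask. A minimal NumPy sketch, assuming boolean masks; the function name is illustrative and not from the thesis:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel Accuracy, Pixel Recall, Pixel Precision, and
    Intersection over Union (IoU) for a predicted binary lung
    mask versus a ground-truth mask of the same shape."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.logical_and(pred, truth).sum()     # true positives
    tn = np.logical_and(~pred, ~truth).sum()   # true negatives
    fp = np.logical_and(pred, ~truth).sum()    # false positives
    fn = np.logical_and(~pred, truth).sum()    # false negatives
    return {
        "accuracy": (tp + tn) / pred.size,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "iou": tp / (tp + fp + fn) if tp + fp + fn else 0.0,
    }
```

IoU is the strictest of the four (it ignores true negatives entirely), which is why the reported IoU values (≈0.89-0.91) sit below the pixel-level accuracy, recall, and precision (≈0.94-0.96).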