Browsing by Subject "Computed tomography"
Item Open Access A Strategy for Matching Noise Magnitude and Texture Across CT Scanners of Different Makes and Models (2012) Solomon, Justin Bennion
Purpose: The fleet of x-ray computed tomography systems used by large medical institutions often comprises scanners from various manufacturers. An inhomogeneous fleet of scanners can lead to inconsistent image quality because of the different features and technologies implemented by each manufacturer. In particular, image noise can be highly variable across scanners from different manufacturers. To partly address this problem, we performed two studies to characterize noise magnitude and texture on two scanners: one from GE Healthcare and one from Siemens Healthcare. The purpose of the first study was to evaluate how noise magnitude changes as a function of image quality indicators (e.g., "noise index" and "quality reference mAs") when automatic tube current modulation is used. The purpose of the second study was to compare and match reconstruction kernels from each vendor with respect to noise texture.
Methods: The first study was performed by imaging anthropomorphic phantoms on each scanner using a clinical range of scan settings and image quality indicator values. Noise magnitude was measured at various anatomical locations using an image subtraction technique. Noise was then modeled as a function of image quality indicators and other scan parameters that were found to significantly affect the noise-image quality indicator relationship.
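The image-subtraction approach mentioned above is straightforward to express in code. The sketch below, in Python with NumPy, is only an illustration of the general technique, assuming two repeated, co-registered scans of the same phantom are available; the array and ROI names are hypothetical, not taken from the dissertation.

    # Minimal sketch of the image-subtraction noise measurement described above.
    # Assumes two repeated, co-registered scans of the same phantom are available
    # as 2D numpy arrays; names and the ROI are illustrative.
    import numpy as np

    def subtraction_noise(img_a, img_b, roi):
        """Estimate noise magnitude (HU) in a region of interest.

        Subtracting two repeated acquisitions cancels the (identical) anatomy,
        leaving only stochastic noise; the standard deviation of the difference
        is sqrt(2) times the single-image noise.
        """
        diff = img_a.astype(float) - img_b.astype(float)
        return float(np.std(diff[roi]) / np.sqrt(2.0))

    # Example with synthetic data: a smooth background plus independent noise.
    rng = np.random.default_rng(0)
    background = np.full((256, 256), 50.0)            # uniform 50 HU "anatomy"
    scan1 = background + rng.normal(0, 12.0, background.shape)
    scan2 = background + rng.normal(0, 12.0, background.shape)
    roi = (slice(64, 192), slice(64, 192))
    print(f"Estimated noise: {subtraction_noise(scan1, scan2, roi):.1f} HU")  # ~12 HU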
The second study was performed by imaging the American College of Radiology CT accreditation phantom with a comparable acquisition protocol on each scanner. Images were reconstructed using filtered backprojection and a wide selection of reconstruction kernels. We then estimated the noise power spectrum (NPS) of each image set and performed a systematic kernel-by-kernel comparison of spectra using the peak frequency difference (PFD) and the root mean square error (RMSE) as metrics of similarity. Kernels that minimized the PFD and RMSE were paired.
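A minimal sketch of this kind of kernel comparison is given below. It assumes an ensemble of uniform-region ROIs per kernel from which a radially averaged 1D NPS is computed, and that the PFD and RMSE are evaluated on area-normalized spectra; this illustrates the general approach, not the exact implementation used in the study.

    # Illustrative sketch (not the authors' exact implementation) of comparing
    # noise power spectra (NPS) from two reconstruction kernels using the peak
    # frequency difference (PFD) and RMSE metrics described above.
    import numpy as np

    def nps_1d(rois, pixel_mm):
        """Radially averaged 1D NPS from a stack of uniform-region ROIs (N, ny, nx)."""
        n, ny, nx = rois.shape
        rois = rois - rois.mean(axis=(1, 2), keepdims=True)       # remove the mean
        nps2d = np.mean(np.abs(np.fft.fft2(rois)) ** 2, axis=0)
        nps2d *= pixel_mm ** 2 / (ny * nx)                        # HU^2 mm^2 units
        fy = np.fft.fftfreq(ny, d=pixel_mm)
        fx = np.fft.fftfreq(nx, d=pixel_mm)
        fr = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))        # radial frequency
        bins = np.arange(0.0, fr.max(), 1.0 / (nx * pixel_mm))
        idx = np.digitize(fr.ravel(), bins)
        vals = nps2d.ravel()
        nps1d = np.array([vals[idx == i].mean() if np.any(idx == i) else 0.0
                          for i in range(1, len(bins))])
        freqs = 0.5 * (bins[:-1] + bins[1:])
        return freqs, nps1d

    def compare_kernels(freqs, nps_a, nps_b):
        """PFD (mm^-1) and RMSE (mm) between two area-normalized spectra."""
        a = nps_a / np.trapz(nps_a, freqs)
        b = nps_b / np.trapz(nps_b, freqs)
        pfd = abs(freqs[np.argmax(a)] - freqs[np.argmax(b)])
        rmse = np.sqrt(np.mean((a - b) ** 2))
        return pfd, rmse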
Results: From the first study, on the GE scanner, noise magnitude increased linearly with noise index. The slope of that line was affected by the anatomy of interest, kVp, reconstruction algorithm, and convolution kernel. The noise-noise index relationship was independent of phantom size, slice thickness, pitch, field of view, and beam width. On the Siemens scanner, noise magnitude decreased non-linearly with increasing quality reference effective mAs, slice thickness, and peak tube voltage. The noise-quality reference effective mAs relationship also depended on the anatomy of interest, phantom size, age selection, and reconstruction algorithm, but was independent of pitch, field of view, and detector configuration. From the second study, the RMSE between the NPS of GE and Siemens kernels varied from 0.02 to 0.74 mm. The GE kernels "Soft", "Standard", "Chest", and "Lung" closely matched the Siemens kernels "B35f", "B43f", "B46f", and "B80f" (RMSE < 0.07 mm, PFD < 0.02 mm^-1). The GE "Bone", "Bone+", and "Edge" kernels all matched most closely to the Siemens "B75f" kernel, but with sizeable RMSE and PFD values of up to 0.62 mm and 0.5 mm^-1, respectively. These sizeable RMSE and PFD values corresponded to visually perceivable differences in the noise texture of the images.
Conclusions: From the first study, we established how noise changes with changing image quality indicators across a clinically relevant range of imaging parameters. This will allow us to target equal noise levels across manufacturers. From the second study, we concluded that it is possible to use the NPS to quantitatively compare noise texture across CT systems. We found that many commonly used GE and Siemens kernels have similar texture. The degree to which similar texture across scanners can be achieved varies and is limited by the kernels available on each scanner. This result will aid in choosing appropriate corresponding kernels for each scanner when writing protocols. Taken together, the results from these two studies will allow us to write protocols that produce images with more consistent noise properties.
Item Open Access Accuracy and Patient Dose in Neutron Stimulated Emission Computed Tomography for Diagnosis of Liver Iron Overload: Simulations in GEANT4 (2007-08-13) Kapadia, Anuj
Neutron stimulated emission computed tomography (NSECT) is being proposed as an experimental technique to diagnose iron overload in patients. Proof-of-concept experiments have suggested that NSECT may have the potential to provide a non-invasive diagnosis of iron overload in a clinical system. The technique's sensitivity to high concentrations of iron, combined with its tomographic acquisition ability, gives it a unique advantage over other competing modalities. While early experiments have demonstrated the efficacy of detecting samples with high concentrations of iron, a tomography application for patient diagnosis has never been tested. As with any other tomography system, the performance of NSECT depends greatly on the acquisition parameters used to scan the patient. In order to determine the best acquisition geometry for a clinical system, it is important to evaluate and understand the effect of varying each individual acquisition parameter on the accuracy of the reconstructed image. This research work proposes to use Monte Carlo simulations to optimize a clinical NSECT system for iron overload diagnosis. Simulations of two NSECT systems have been designed in GEANT4: a spectroscopy system to detect uniform concentrations of iron in the liver, and a tomography system to detect non-uniform iron overload. Each system has been used to scan simulated samples of both disease models in humans to determine the best scanning strategy for each. The optimal scanning strategy is defined as the combination of parameters that provides maximum accuracy with minimum radiation dose. Evaluation of accuracy is performed through ROC analysis of the reconstructed spectra and images. For the spectroscopy system, the optimal acquisition geometry is defined in terms of the number of neutrons required to detect a clinically relevant concentration of iron. For the tomography system, the optimal scanning strategy is defined in terms of the number of neutrons and the number of spatial and angular translation steps used during acquisition. Patient dose for each simulated system is calculated by measuring the energy deposited by the neutron beam in the liver and surrounding body tissue. Simulation results indicate that both scanning systems can detect wet iron concentrations of 5 mg/g or higher. Spectroscopic scanning with sufficient accuracy is possible with 1 million neutrons per scan, corresponding to a patient dose of 0.02 mSv. Tomographic scanning requires 8 angles that sample the image matrix at 1 cm projection intervals with 4 million neutrons per projection, which corresponds to a total body dose of 0.56 mSv. The research performed for this dissertation has two important outcomes. First, it demonstrates that NSECT has the clinical potential for iron overload diagnosis in patients. Second, it provides a validated simulation of the NSECT system that can be used to guide future development and experimental implementation of the technique.
Item Open Access Advanced Techniques for Image Quality Assessment of Modern X-ray Computed Tomography Systems (2016) Solomon, Justin Bennion
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011.
CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; as a result of this, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All else being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., filtered back-projection [FBP] vs. Advanced Modeled Iterative Reconstruction [ADMIRE]). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
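As a point of reference, one common formulation of the non-prewhitening matched filter detectability index combines a task function, the system's task transfer function (TTF), and the NPS in the frequency domain. The sketch below illustrates that formulation with illustrative inputs; it is not the specific observer model implementation used in this work.

    # Hedged sketch of a non-prewhitening (NPW) matched-filter detectability index,
    # one common formulation in the task-based image quality literature. Assumes the
    # task function W, the task transfer function TTF, and the noise power spectrum
    # NPS are sampled on the same 2D spatial-frequency grid; all inputs below are
    # illustrative placeholders.
    import numpy as np

    def npw_detectability(W, TTF, NPS, df):
        """d' for the NPW observer: signal energy passed by the system,
        divided by the (non-prewhitened) noise it must compete with."""
        signal = np.sum((np.abs(W) ** 2) * TTF ** 2) * df ** 2
        noise = np.sum((np.abs(W) ** 2) * TTF ** 2 * NPS) * df ** 2
        return float(signal / np.sqrt(noise))

    # Toy example on a 128x128 frequency grid with 0.02 mm^-1 spacing.
    n, df = 128, 0.02
    f = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / (n * df)))
    fx, fy = np.meshgrid(f, f)
    fr = np.hypot(fx, fy)
    W = 20.0 * np.exp(-(fr / 0.3) ** 2)       # illustrative small low-contrast task
    TTF = np.exp(-(fr / 0.6) ** 2)            # illustrative system TTF
    NPS = 200.0 * fr * np.exp(-(fr / 0.4) ** 2) + 1.0   # illustrative ramp-like NPS
    print(f"d' = {npw_detectability(W, TTF, NPS, df):.2f}")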
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that for FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and that texture should be considered when assessing image quality of iterative algorithms.
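One plausible way to handle an irregularly shaped ROI is to apply a binary mask, subtract the in-mask mean, and normalize the periodogram by the number of masked pixels rather than the full ROI area. The sketch below illustrates that idea only; the dissertation's actual method may differ.

    # One plausible approach to estimating an NPS from an irregularly shaped
    # region, offered only as a hedged illustration of the idea (not necessarily
    # the method developed in the dissertation). The region is defined by a
    # boolean mask; pixels outside the mask are zeroed, and the spectrum is
    # normalized by the number of pixels inside the mask.
    import numpy as np

    def masked_nps(noise_images, mask, pixel_mm):
        """2D NPS estimate from an ensemble of noise-only images (N, ny, nx)
        restricted to an irregular region given by a boolean mask (ny, nx)."""
        n_pix = mask.sum()
        spectra = []
        for img in noise_images:
            roi = np.where(mask, img - img[mask].mean(), 0.0)  # detrend inside mask only
            spectra.append(np.abs(np.fft.fft2(roi)) ** 2)
        # Normalize by the masked area (n_pix * pixel_mm^2) instead of the full ROI.
        return np.mean(spectra, axis=0) * pixel_mm ** 2 / n_pix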
To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that, at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in the textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
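To make the idea of an analytical lesion model concrete, the sketch below voxelizes a simple spherical lesion parameterized by radius, contrast, and a logistic edge profile. The dissertation's models are richer (covering multiple shapes and textures), so this is only an illustrative example with hypothetical parameter values.

    # Minimal sketch of an analytical lesion model of the kind described above:
    # a spherical lesion parameterized by size, contrast, and edge sharpness,
    # which can be voxelized and added to a patient volume to form a "hybrid"
    # image. This is an illustration of the concept, not the dissertation's
    # exact model.
    import numpy as np

    def voxelize_lesion(shape, spacing_mm, center_mm, radius_mm,
                        contrast_hu, edge_mm):
        """Return a voxel volume containing a soft-edged spherical lesion.

        contrast_hu : peak contrast relative to background (e.g., -15 HU)
        edge_mm     : width of the logistic edge profile (larger = blurrier edge)
        """
        grids = [np.arange(n) * s for n, s in zip(shape, spacing_mm)]
        zz, yy, xx = np.meshgrid(*grids, indexing="ij")
        r = np.sqrt((zz - center_mm[0]) ** 2 +
                    (yy - center_mm[1]) ** 2 +
                    (xx - center_mm[2]) ** 2)
        # Logistic edge: ~contrast_hu inside, ~0 outside, smooth transition.
        return contrast_hu / (1.0 + np.exp((r - radius_mm) / edge_mm))

    # Usage: a subtle -15 HU, 6 mm radius lesion on a hypothetical voxel grid.
    lesion = voxelize_lesion(shape=(64, 128, 128), spacing_mm=(1.0, 0.7, 0.7),
                             center_mm=(32.0, 45.0, 45.0), radius_mm=6.0,
                             contrast_hu=-15.0, edge_mm=0.8)
    # hybrid = patient_volume + lesion   # patient_volume is assumed to exist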
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that, compared to FBP, SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65%. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard of care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Item Open Access Development and Application of Patient-Informed Metrics of Image Quality in CT (2020) Smith, Taylor Brunton
The purpose of this dissertation was to develop methods of measuring patient-specific image quality in computed tomography. The methods developed in this dissertation enable the noise power spectrum, low contrast resolution, and ultimately a detectability index to be measured in a patient-specific manner. The project is divided into three parts: 1) demonstrating the utility of currently developed patient-specific measures of image quality, 2) developing a method to estimate the noise power spectrum and low contrast task transfer function from patient images, and 3) applying the extended metrology to the calculation of a patient-specific and task-specific detectability index of the future. In part 1 (chapters 2 and 3), the value of patient-specific image quality is demonstrated in two ways. First, patient-specific measures of noise magnitude and high-contrast resolution were deployed on a broad clinical dataset of chest and abdomen-pelvis exams. Image quality and dose were measured for 87,629 cases across 97 medical facilities, and the variability in each outcome is reported. Such measurements of variability would be impossible in a phantom-derived image quality paradigm. Second, patient-specific measures of noise magnitude and high-contrast resolution were combined with a phantom-derived noise power spectrum to yield a detectability index. The hybrid (patient- and phantom-derived) detectability index was measured and retrospectively compared to the results of a detection observer study. The measured hybrid detectability index was found to correlate with human observer detection performance, further demonstrating the value of measuring patient-specific image quality. In part 2 (chapters 4 and 5), two image quality aspects are extended from a phantom-derived to a patient-specific paradigm. In chapter 4, a method to measure the noise power spectrum from patient images is developed and validated using virtual imaging trial and physical phantom data. The method is applied to unseen clinical cases to demonstrate its feasibility and its sensitivity to expected trends across image reconstructions. Since the method relies on a sufficient area within the patient's liver to make a measurement, the sensitivity of measurement accuracy to the region size is assessed. Results show that the measurements can be accurate with as few as 106 included pixels, and that measurements are sensitive to ground truth differences in reconstruction algorithm. In chapter 5, a method to measure low contrast resolution from patient images is developed and validated using low contrast insert phantom scans. The method uses a support vector machine to learn the connection between the patient-specific noise power spectrum measured in chapter 4 and the low contrast task transfer function. The estimation method is compared to a clinical alternative, and results show that it is more accurate on the basis of RMSE for iterative reconstructions (especially high strength reconstructions). In part 3 (chapter 6 and appendix section 8.1), the developed patient-specific image quality metrology is applied to calculate a fully patient-specific detectability index. Here, patient-specific image quality measures are re-applied to the detectability index calculations from chapter 3, converting the calculations from a hybrid method to a fully patient-specific method.
To do so, the patient-specific noise power spectrum estimates from chapter 4 were combined with the patient-specific low contrast task transfer functions from chapter 5 to inform the detectability index calculations. The purpose of this chapter was to show the positive impact of measuring a task-based measure of image quality in a fully patient-specific paradigm. The results show that the fully patient-specific detectability index has a statistically significantly stronger relationship with human detection accuracy than the hybrid measurements. This section also served as an indirect validation of the methodologies in chapters 4 and 5. Finally, all patient-specific measures were deployed over a variety of clinical cases to demonstrate the feasibility of using the methods to monitor image quality. In conclusion, this dissertation developed methods to assess task-based and task-generic image quality directly from patient images, and demonstrated the utility and value of patient-specific image quality assessment.
Item Open Access Development of a Method to Detect and Quantify CT Motion Artifacts: Feasibility Study (2022) Khandekar, Madhura
Artifacts are known to reduce the quality of CT images and can affect the statistical analysis and quantitative utility of those images. Motion artifact is a leading type of CT artifact, arising from either voluntary motion or involuntary motion (e.g., respiratory and cardiac movements). Currently, such artifacts, if present, are not quantified and monitored, nor are their dependencies on CT acquisition settings known. As a first step to address this gap, the aim of this study was to develop a neural network to detect and quantify motion artifacts in CT images. Training data were drawn from three sources; the pixels containing motion were segmented (Seg3D, University of Utah) and the segmentation masks were used as the ground truth labels. A convolutional neural network (u-net) was trained to identify pixels containing motion. Model performance was assessed by correlating, for each slice of the pre-allocated testing data, the percentage of voxels labeled as having motion in the ground-truth and predicted segmentation masks, yielding a correlation coefficient of r = 0.43, as well as by constructing ROC curves. A series-wise ROC curve had AUC = 0.94, and a slice-wise ROC curve had AUC = 0.80. The correlation coefficient and AUCs are expected to improve as more training data are added. This network has the potential to be a useful clinical tool, enabling quality tracking systems to detect and quantify the presence of artifacts in the context of CT quality control.
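The slice-wise evaluation described above can be sketched as follows, assuming ground-truth and predicted motion masks are available as arrays; the function below computes the per-slice percentage-of-motion correlation and a slice-wise ROC AUC. The names and the use of scikit-learn are assumptions for illustration, not the study's actual code.

    # Hedged sketch of the slice-wise evaluation described above: correlate the
    # fraction of motion-labeled pixels per slice between ground-truth and
    # predicted masks, and compute a slice-wise ROC AUC (a slice is "positive"
    # if its ground truth contains any motion). Array names are illustrative.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def evaluate_motion_masks(gt_masks, pred_probs):
        """gt_masks: (n_slices, ny, nx) binary; pred_probs: same shape, in [0, 1]."""
        gt_frac = gt_masks.reshape(len(gt_masks), -1).mean(axis=1)
        pred_frac = pred_probs.reshape(len(pred_probs), -1).mean(axis=1)
        r = np.corrcoef(gt_frac, pred_frac)[0, 1]       # correlation of % motion
        slice_labels = (gt_frac > 0).astype(int)        # slice contains motion?
        # ROC AUC requires both motion and motion-free slices to be present.
        auc = roc_auc_score(slice_labels, pred_frac)
        return r, auc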
Item Open Access Development of an Integrated SPECT-CmT Dedicated Breast Imaging System Incorporating Novel Data Acquisition and Patient Bed Designs (2010) Crotty, Dominic
This thesis research builds upon prior work that developed separate SPECT and CT (computed mammotomography, or breast CT) devices that were independently capable of imaging an uncompressed breast in 3D space. To further develop the system as a clinically viable device, it was necessary to integrate the separate imaging systems onto a single gantry, and to simultaneously design a patient-friendly bed that could routinely and effectively position the patient during dual-modality imaging of her uncompressed breast in the system's common field of view. This thesis describes this process and also investigates practical challenges associated with dedicated breast imaging of a prone patient using the integrated SPECT-CT device.
We initially characterized the practicability of implementing the novel x-ray beam ultra-thick K-edge filtration scheme designed for routine use with the breast CT system. Extensive computer simulations and physical measurements were performed to characterize the x-ray beam produced using K-edge filtration with cerium and to compare it to beams produced using other filtration methods and materials. The advantages of using this heavily filtered x-ray beam for uncompressed breast CT imaging were then further evaluated by measuring the dose absorbed by an uncompressed cadaver breast during the course of a routine tomographic scan. It was found that the breast CT device is indeed capable of imaging uncompressed breasts at dose levels below that of the maximum utilized for dual-view screening mammography.
To prepare the separate SPECT and CT systems for integration onto a single platform, the cross contamination of the image of one modality by primary and scattered photons of the complementary modality was quantified. It was found that contamination levels of the emission (SPECT) image by the x-ray transmission source were generally far less than 2% when using photopeak energy windows up to ±8%. In addition, while there was some quantifiable evidence of a variation in the transmission image in response to the presence of 99mTc photons in the patient, the effect of primary and scattered 99mTc photons on the visibility of 5 mm acrylic objects in a low contrast x-ray transmission environment was negligible.
A novel, tiered, stainless steel patient bed was then designed to allow dual-modality imaging using the integrated SPECT-CT system. The performance of the hybrid SPECT-CT system was evaluated during early stage dual-modality patient imaging trials with particular emphasis placed on the performance of the patient bed. The bed was successful in its primary task of enabling dual-modality imaging of a patient's breast in the common field of view, but practical challenges to more effective patient imaging were identified as well as some novel solutions to these challenges.
In the final section of the thesis research, the feasibility of using two of these solutions was investigated with a view to imaging more of the patient's posterior breast volume. Limited angle tomographic trajectories and trajectories that involve raising or lowering the patient bed in mid tomographic acquisition were initially investigated using various geometric phantoms. A very low contrast imaging task was then tested using an observer study to quantify the effect of these trajectories on the ability of observers to maintain visibility of small geometric objects.
This initial integrated SPECT-CT imaging system has demonstrated its ability to successfully perform low dose, dual-modality imaging of the uncompressed breast. Challenges and solutions have been identified here that will make future SPECT-CT designs even more powerful and clinically relevant for molecular imaging of the breast.
Item Open Access Development of Radiochromic Film for Spatially Quantitative Dosimetric Analysis of Indirect Ionizing Radiation Fields (2010) Brady, Samuel Loren
Traditional dosimetric devices are inherently point dose dosimeters (PDDs) and can only measure the magnitude of the radiation exposure; hence, they are one-dimensional (1D). To measure both the magnitude and the spatial location of dose within a volume, either several PDDs must be used at one time, or one PDD must be translated from point to point. Using PDDs for spatially distributed, two-dimensional (2D) dosimetry is laborious, time consuming, limited in spatial resolution, and susceptible to positioning errors, yet it remains the currently accepted approach to measuring dose distributions in 2D. This work seeks to expand the current limits of indirectly ionizing radiation dosimetry by using radiochromic film (RCF) as a high-resolution, accurate dosimetry system. Using RCF will extend the current field of radiation dosimetry to spatially quantitative 2D and three-dimensional (3D) measurements.
This work was organized into two aims. The first aim was the development of the RCF dosimetry system; it was accomplished by characterizing the film and the readout devices and developing a method to calibrate film response for absolute dose measurements. The second aim was to apply the RCF dosimetry system to three areas of dosimetry that are inherently volumetric and could benefit from multi-dimensional (2D or 3D) dose analysis. These areas span a broad range of radiation energies: low energy (mammography), intermediate energy (computed tomography, CT), and high energy (radiobiological small animal irradiation and cancer patient treatment verification). The application of a single dosimeter over such a broad range of energies is currently unavailable for most traditional dosimeters and thus was used to demonstrate the robustness and flexibility of the RCF dosimetry system.
Two types of RCF were characterized for this work: EBT and XRQA film. Both films were investigated for: radiation interaction with film structure; light interaction with film structure for optimal film readout (densitometry) sensitivity; range of absorbed dose measurements; dependence of film dose measurement response as a function of changing radiation energy; fractionation and dose rate effects on film measurement response; film response sensitivity to ambient factors; and stability of measured film response with time. EBT film was shown to have the following properties: near water-equivalent effective atomic number (Zeff); a dynamic dose range of 10^-1 to 10^2 Gy; a 3% change in optical density (OD) response for a single exposure level when exposed to radiation energies from 75 kV to 18,000 kV; and best digitization using transmission densitometry. XRQA film was shown to have: a Zeff of ~25; a 12-fold increase in sensitivity at lower photon energies for a dynamic dose range of 10^-3 to 10^0 Gy; a difference of 25% in OD response when comparing 120 kV to 320 kV; and best digitization using reflective densitometry. Both XRQA and EBT films were shown to have: a temporal stability (ΔOD) of ~1% for t > 24 hr post film exposure for up to ~20 days; a change in dose response of ~0.03 mGy hr^-1 when exposed to fluorescent room lighting at standard room temperature and humidity levels; a negligible dose rate and fractionation effect when operated within the optimal dose ranges; and a light wavelength dependence with dose for film readout.
The flatbed scanner was chosen as the primary film digitizer because of its availability, cost, OD range, functionality (transmission and reflection scanning), and digitization speed. As a cost-versus-functionality comparison, the intrinsic and operational limitations were determined for two flatbed scanners. The EPSON V700 and 10000XL exhibited equal spatial and OD accuracy. The combined precision of the scanner light sources and CCD sensors measured < 2% and < 7% deviation in pixel light intensities for 50 consecutive scans, respectively. Both scanner light sources were shown to be uniform in transmission and reflection scan modes along the center axis of light source translation. Additionally, RCFs demonstrated a larger dynamic range in pixel light intensities, and were less sensitive to off-axis light inhomogeneity, when scanned in landscape mode (long axis of film parallel with the axis of light source translation). The EPSON 10000XL demonstrated slightly better light source/CCD temporal stability and provided the capacity to scan larger film formats at the center of the scanner in landscape mode. However, the EPSON V700 differed in overall accuracy and precision by only 2% and, though smaller in size, was at the time of this work one sixth the cost of the 10000XL. A scan protocol was developed to maximize RCF digitization accuracy and precision, and a calibration fitting function was developed for RCF absolute dosimetry. The fitting function demonstrated a superior goodness of fit for both RCF types over a large range of absorbed dose levels compared to the currently accepted function found in the literature.
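The dissertation's calibration fitting function is not reproduced here. As a hedged illustration of the calibration step, the sketch below fits a functional form commonly seen in the radiochromic film literature (dose as a linear-plus-power function of net optical density) to illustrative calibration data.

    # Hedged sketch of fitting a radiochromic-film calibration curve. The
    # functional form dose = a*netOD + b*netOD**n is one commonly used in the
    # RCF literature, not necessarily the function developed in this work,
    # and the calibration data below are illustrative.
    import numpy as np
    from scipy.optimize import curve_fit

    def dose_model(net_od, a, b, n):
        """Map net optical density to absorbed dose (Gy)."""
        return a * net_od + b * np.power(net_od, n)

    # Illustrative calibration data: known delivered doses and measured net OD.
    doses_gy = np.array([0.1, 0.5, 1.0, 2.0, 4.0, 8.0])
    net_od   = np.array([0.02, 0.09, 0.17, 0.31, 0.55, 0.92])

    params, _ = curve_fit(dose_model, net_od, doses_gy, p0=[5.0, 5.0, 2.0])
    print("fit parameters (a, b, n):", np.round(params, 3))
    # The fitted curve then converts any scanned film's net OD to absolute dose:
    print("dose at netOD = 0.40:", round(dose_model(0.40, *params), 2), "Gy")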
The RCF dosimetry system was applied to three novel areas that could benefit from 2D or 3D dosimetric information. The first area was 3D dosimetry of a pendant breast in 3D-CT mammography. The novel method of developing a volumetric image of the breast from a CT acquisition technique was empirically measured for its dosimetry and compared to standard dual-field digital mammography. The second area was dose reduction in CT for pediatric and adult scan protocols. In this application, novel methodologies were developed to measure 3D organ dosimetry and characterize a dose reduction scan protocol for pediatric and adult body habitus. The third area was in the field of small animal irradiation for radiobiology purposes and cancer patient treatment verification. Two methods for small animal irradiation were analyzed for their dosimetry. The first technique was within a gamma irradiator environment using a 137Cs source (663 keV), and the second, a novel approach to mouse irradiation, was developed for fast neutron (10 MeV) irradiation by a Tandem Van de Graaff accelerator using the ²H(d,n)³He reaction. For cancer patient treatment verification, RCF was used to verify a 3D radiochromic plastic, PRESAGE™, using multi-leaf collimation (MLC) on a medical linear accelerator (LINAC) with 6 MV x-rays. The RCF and PRESAGE™ dosimeters were employed to verify a simple respiratory-gated lung treatment for a small nodule; the film was considered the gold standard. In every case, the RCF dosimetry system was verified for accuracy using a traditional PDD as the gold standard. When considering all areas of radiation energy applications, the RCF dosimetry system agreed with the gold standard to better than 7%, and in some cases to better than 1%. In many instances, this work provided vital dosimetric information that otherwise would not have been captured using a PDD in a similar geometry. This work demonstrates the value of RCF for more accurately measuring volumetric dose.
Item Open Access Effects of High Volume MOSFET Usage on Dosimetry in Pediatric CT, Pediatric Lens of the Eye Dose Reduction Using Siemens Care kV, & Designing Quality Assurance of a Cesium Calibration Source (2017) Smith, Aaron Kenneth
Project 1: Effects of High Volume MOSFET Usage on Dosimetry in Pediatric CT
Purpose
The objective of this study was to determine whether using large numbers of metal-oxide-semiconductor field-effect transistors (MOSFETs) affects the results of dosimetry studies done with pediatric phantoms, due to the attenuation properties of the MOSFETs. The two primary focuses of the study were, first, to experimentally determine the degree to which high numbers of MOSFET detectors attenuate an x-ray beam of computed tomography (CT) quality and, second, to experimentally verify the effect that a large number of MOSFETs has on dose in a pediatric phantom undergoing a routine CT examination.
Materials and Methods
A Precision X-Ray X-Rad 320 set to 120 kVp with an effective half value layer of 7.30 mm aluminum was used in concert with a tissue equivalent block phantom and several used MOSFET cables to determine the attenuation properties of the MOSFET cables, by measuring the dose (via a 0.18 cc ion chamber) delivered to a point in the center of the phantom during a 0.5 min exposure with a variety of MOSFET arrangements. After the attenuating properties of the cables were known, a GE Discovery 750 CT scanner was employed using a routine chest CT protocol in concert with a 10-year-old Atom Dosimetry Phantom and MOSFET dosimeters in 5 different locations in and on the phantom (upper left lung (ULL), upper right lung (URL), lower left lung (LLL), lower right lung (LRL), and the center of the chest to represent skin dose). Twenty-eight used MOSFET cables were arranged and taped on the chest of the phantom to cover 30% of the circumference of the phantom (19.2 cm). Scans with and without tube current modulation were taken at 30, 20, 10, and 0% circumference coverage, as well as with the 28 MOSFETs bundled and laid to the side of the phantom. The dose to the various MOSFET locations in and on the chest was calculated, and the image quality was assessed in several of these situations by taking the standard deviation of large regions of interest in both the lung and the soft tissue of the chest to measure the noise.
Results
The proof of concept experiment found that the main cable of the MOSFET, not the end closest to the reading tip, is the most attenuating part of the cable. The proof of concept also found that increasing the number of MOSFET layers to 1, 2, 3, and 4 layers decreased the dose to the center of the phantom by 17.92, 28.04, 39.98, and 42.49%, respectively. Increasing the percentage of the block phantom covered to 10, 30, and 50% decreased the dose to the center of the phantom by 17.92, 17.80, and 18.17%, respectively.
Project 2: Pediatric Lens of the Eye Dose Reduction Using Siemens Care kV
Purpose
Siemens Care kV is a software tool that recommends a tube potential (kV) setting for CT scans based on the thickness of the anatomy being scanned, in order to reduce dose on a patient-by-patient basis. Pediatric cranial scans at Duke do not use this software, nor do they use tube current modulation. Dose to the lens of the eye in pediatric patients can lead to lens opacity later in life [10]. The goal of this project was to determine whether Care kV can be used in pediatric cranial scans to reduce the dose to the lens of the eye while maintaining adequate image quality.
Materials and Methods
A Siemens SOMATOM Force CT scanner performing a routine cranial scan protocol was used in concert with two Atom Dosimetry Phantoms (1-year-old and 5-year-old) and MOSFET dosimeters to determine the effect that changing the reference tube potential of the Care kV software would have on dose and image quality (measured with CNR). The Care kV settings used were off, and semi mode with reference tube potentials of 120, 110, and 100 kV.
Results
Dose to the lens of the eye was reduced for the 1-year-old phantom by 9.601, 17.572, and 19.724% when using Care kV with the reference tube potential set to 120, 110, and 100 kV, respectively, and for the 5-year-old phantom by 1.060, 8.859, and 17.854% for the same settings. Soft tissue CNR was reduced for the 1-year-old phantom by 8.812, 11.001, and 5.018%, and for the 5-year-old phantom by 3.473, 5.517, and 3.248%, at reference tube potentials of 120, 110, and 100 kV, respectively. Bone CNR was reduced for the 1-year-old phantom by 4.447, 8.175, and 10.046%, and for the 5-year-old phantom by 4.782, 7.966, and 11.715%, for the same settings.
Project 3: Designing Quality Assurance for Cesium Calibration Source
Purpose
North Carolina regulations state that survey meters must be traceable to NIST. The Cs-137 calibration source used by Duke was installed in 2005 and has not been independently measured since, apart from its use in routine calibration of survey meters. The goal of this project was to measure the geometry and dose rate of the source and make a recommendation as to how, and how often, quality assurance measurements should be made with a NIST-traceable ion chamber.
Materials and Methods
Gafchromic XR QA2 radiochromic film was placed in the source beam to measure the angle of the source collimator. Two 0.18 cc ion chambers and a 6 cc ion chamber were used in a variety of combinations of distance from the source and attenuation to determine the exposure rate of the calibration source and compare it to the calibration table currently in use.
Results
The collimator angles for the top, bottom, left, and right were calculated to be 12.13, 9.648, 11.58, and 11.58 degrees, respectively. The two 0.18 cc ion chambers deviated from the table values by more than 30% for every measurement. The 6 cc ion chamber deviated from the calibration table in use by 9.55, 8.13, 3.36, and 3.72% for the 30 cm no attenuation, 30 cm 2x attenuation, 100 cm no attenuation, and 100 cm 2x attenuation measurements, respectively.
Item Open Access Functional and Molecular Imaging Using Nanoparticle Contrast Agents for Dual-Energy Computed Tomography (2017) Ashton, Jeffrey Ronald
X-ray computed tomography (CT) is one of the most useful diagnostic tools for clinicians, with widespread availability, fast scan times, and low cost. CT imaging can reveal a patient’s anatomy in exquisite detail and is extremely useful in the diagnosis of a wide variety of diseases. However, CT is currently limited to anatomical imaging due to the lack of appropriate contrast agents and imaging protocols that would allow for molecular imaging, so clinicians must instead rely on other modalities which are more expensive and less readily available. Dual energy CT, a relatively new technique in which two x-ray energies are used for a single scan, can provide valuable information about tissue material composition. This information can potentially be used for molecular imaging if it is coupled with appropriately-designed contrast agents.
This work aims to extend the use of CT into the molecular imaging realm by developing and testing nanoparticle contrast agents for use with dual energy CT. Several studies were carried out, each of which focused on a different application of using nanoparticle contrast agents together with dual energy CT for molecular imaging.
A commercial blood pool iodine contrast agent for pre-clinical CT (Exia-160) has been shown to accumulate in the myocardium and continue to enhance the myocardium after the contrast agent has been cleared from the bloodstream. It was hypothesized that this agent would not accumulate in infarcted myocardium, which would allow for specific identification of myocardial infarction by CT. Mice were injected with the contrast agent following myocardial infarction, and dual energy CT was used to identify the iodine within the myocardium and separate the iodine from the calcium in the neighboring ribs. Regions of myocardial infarction showed no enhancement on CT, while the healthy myocardium was highly enhanced. Size and position of myocardial infarction determined by dual energy CT were compared with the standard molecular imaging technique for measuring myocardial viability (SPECT). It was found that dual energy CT using this nanoparticle contrast agent reliably agreed with the gold standard molecular imaging method.
Molecular imaging for the improved detection and characterization of lung tumors was also explored through two different studies. The first study used both gold nanoparticles and iodine-containing liposomes together with dual energy CT in order to measure tumor vascular functional parameters, including tumor fractional blood volume and vascular permeability. These dual energy CT measurements were confirmed with ex vivo tissue analysis to demonstrate the validity and accuracy of the in vivo dual energy CT method. The second study used antibody-targeted gold nanoparticles to image EGFR-positive tumors. Two different types of antibodies were tested: a clinically used humanized anti-EGFR antibody, and a small llama-derived single domain anti-EGFR antibody. The single domain antibody showed improved blood half-life and reduced immune clearance compared to the full-sized antibody when attached to gold nanoparticles, but the higher affinity of the full-sized antibody led to much higher overall tumor accumulation. This antibody significantly increased the accumulation of gold nanoparticles in tumors expressing high levels of EGFR. Together, these two studies showed that dual energy CT and nanoparticle contrast agents can be used to measure a wide variety of tumor functional parameters, including tumor vascular density, vascular permeability, and receptor expression. All these parameters can be combined with the anatomical CT imaging to better characterize lung tumors and differentiate between benign and malignant lesions.
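The underlying arithmetic of separating two contrast materials with dual-energy CT can be illustrated with a per-voxel linear decomposition, as sketched below. The sensitivity coefficients are illustrative placeholders rather than calibrated values, and this is a conceptual sketch rather than the decomposition algorithm used in these studies.

    # Hedged sketch of the basic idea behind separating two contrast materials
    # (e.g., iodine and gold) with dual-energy CT: at each voxel, the measured
    # enhancement at the low- and high-energy scans is modeled as a linear
    # combination of the two materials' concentrations, and the 2x2 system is
    # inverted. The sensitivity coefficients are illustrative placeholders.
    import numpy as np

    # HU enhancement per (mg/mL) of material at low and high tube potentials.
    # Rows: [low kVp, high kVp]; columns: [iodine, gold]. Illustrative numbers.
    SENSITIVITY = np.array([[30.0, 18.0],
                            [16.0, 22.0]])

    def decompose(hu_low, hu_high):
        """Return per-voxel iodine and gold concentrations (mg/mL)."""
        enh = np.stack([hu_low, hu_high], axis=-1)[..., np.newaxis]  # (..., 2, 1)
        conc = np.linalg.inv(SENSITIVITY) @ enh                      # (..., 2, 1)
        return conc[..., 0, 0], conc[..., 1, 0]                      # iodine, gold

    # Example: a voxel enhancing 130 HU at low kVp and 92 HU at high kVp.
    iodine, gold = decompose(np.array(130.0), np.array(92.0))
    print(f"iodine ~ {iodine:.1f} mg/mL, gold ~ {gold:.1f} mg/mL")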
The use of dual energy CT for measuring tumor vascular permeability changes after gold nanoparticle-augmented radiation therapy was also demonstrated. Liposomal iodine was injected into mice following radiation therapy in order to measure vascular permeability. Dual energy CT was used to differentiate the signal of the liposomal iodine from the CT signal of the gold nanoparticles already in the tumor. Tumor permeability was measured with CT using multiple combinations of gold nanoparticles and radiation doses to find the optimal conditions for enhancing the effect of radiation therapy on the vasculature. These conditions were then used to increase the delivery of a liposomal chemotherapy agent to tumors. Tumors treated with the gold-augmented radiation therapy and liposomal drug showed significant growth delay compared to the other groups, confirming the predictions made in the dual energy CT imaging.
Finally, a protease-responsive contrast agent was developed for use with dual energy CT imaging. Clusters of gold nanoparticles cross-linked together by protease-sensitive peptides were injected into mice along with liposomal iodine. In the presence of tumor proteases, the clusters degraded and the concentration of gold within the tumor decreased. Clusters without the protease-sensitive peptide did not degrade and did not leave the tumors. The ratio of iodine to gold in each tumor was measured, and it was found that the ratio was significantly higher in mice injected with the degradable gold clusters compared to mice injected with non-degradable control clusters. This demonstrated the ability to use multiple contrast agents with dual energy CT for enzyme-specific ratiometric molecular imaging.
This work confirms that dual energy CT can be used along with multiple nanoparticle contrast agents for molecular imaging applications. With continued contrast agent development and further application of dual energy CT, these methods can potentially be applied clinically to improve the power of CT imaging and improve diagnosis of a wide variety of pathologies.
Item Open Access Hybrid Reference Datasets for Quantitative Computed Tomography Characterization and Conformance (2018) Robins, Marthony
X-ray computed tomography (CT) imaging is the second most commonly used clinical imaging modality, with an estimated 82 million clinical exams performed in the U.S. in 2016. Despite an average annual decline of 2% since a high of 85.3 million in 2011, it is highly sought for visualizing a host of medical conditions because of its clinical advantages in providing high spatial resolution and fast imaging times. Although limited, the high resolution of CT imaging enables small objects such as lesions to be visualized with good detail. Partly due to their size and the fact that CT is noise and resolution limited, the effects of system resolution and lesion characterization processes (i.e., segmentation and CAD algorithms) are challenging to quantify. For this reason, there is a significant need to account for system resolution and algorithm impact on lesion characterization in a quantitatively reproducible manner.
Cancer is the second leading cause of death in the U.S. A fundamental aspect of cancer diagnosis, treatment and management is effective use of medical imaging. In recent years, cancer screening has received significant attention. In fact, results of screening suggest that early cancer detection can result in higher survival rates.
Beyond just visual inspection, extraction of quantitative lesion features could provide more diagnostic and treatment benefits. Assessing the quantitative capabilities of CT systems is complicated by technical factors such as noise, blur, and motion artifacts. As such, traditional modulation transfer function (MTF) methods are insufficient in characterizing system resolution, especially when non-linearities are introduced by iterative reconstruction. These aforementioned factors contribute to a major component of lesion characterization uncertainty in that they limit the apprehension of lesion ground truth. That being said, there is a wealth of quantifiable information that can be garnered from clinical images, since lesion size, morphology, and potentially texture (i.e., internal heterogeneities) are important quantitative biomarkers for effective clinical decision-making. Considering this, the imaging physics community is steadily progressing toward a quantitative paradigm in CT.
As such, the purpose of this doctoral project was to develop, validate, and disseminate a new phantom, image databases, and assessment tools that are appropriate for ground truth lesion characterization in the context of modern x-ray computed tomography (CT) systems. The project developed lesion assessment methods in the framework of two distinct modes: (a) anthropomorphic phantoms and (b) clinical images.
As an alternative to the MTF, the first aspect of this project aimed at validating the task transfer function (TTF), a quantitative measure of system resolution. The TTF was used as a means to account for the accurate modeling of the low-contrast signal transfer properties of a non-linear imaging system. This study assessed the TTF as a CT system resolution model for lesion blur in the context of reconstruction algorithm, dose, and lesion size, shape, and contrast. TTF-blurred simulated lesions were compared with CT images of corresponding physical lesions using a series of comparative tools. In a four-alternative forced-choice (4AFC) reader study, despite the presence of confounding factors, readers detected the simulated lesions at a rate of only 37.9±3.1%, little better than random guessing (25%). The visual appearance, edge blur, size, and shape of the simulated lesions were similar to those of the physical lesions, suggesting that the 3D TTF modeled the low-contrast signal transfer properties of this non-linear CT system reasonably well.
In the second study, the TTF was implemented as a practical tool for lesion simulation and virtual insertion. A TTF-based lesion simulation framework was developed to model a lesion's morphology in terms of size and shape. The Lungman phantom (Kyoto, Japan) was used in the implementation of two new virtual lesion insertion methods (i.e., the projection-domain and image-domain virtual lesion insertion methods). A third method, previously developed by the U.S. Food and Drug Administration (FDA), was used as a benchmark. Using these TTF-based insertion methods, TTF-blurred computer aided design (CAD) lesions were virtually inserted into phantom CT projection or reconstructed data. This study compared a series of virtually inserted, TTF-blurred CAD lesions against a corresponding series of CT-blurred physical lesions. Pair-wise comparisons were made in terms of size and shape, yielding a 3% difference in volume and a 5% difference in shape between physical and simulated lesions. This study provided an indication that the proposed lesion modeling framework can quantitatively produce realistic surrogates for real lesions.
Third, a systematic assessment of bias and variability in lesion texture feature measurements was performed across a series of clinical image acquisition settings and reconstruction algorithms. A series of CT images was simulated using three computational phantoms with anatomically-informed texture, representing four in-plane pixel sizes, three slice thicknesses, three dose levels, and 33 noise and resolution models characteristic of five commercial scanners (GE LightSpeed VCT, GE Discovery 750 HD, GE Revolution, Siemens Definition Flash, and Siemens Force). Twenty-one statistical texture features were calculated and compared between the ground truth phantom (i.e., pre-imaging) and its corresponding post-imaging simulations. In addition, each texture feature was measured with four unique volume of interest (VOI) sizes. Across VOI sizes and imaging settings, the percent relative difference between the post-imaging simulations and the ground truth ranged from -97% to 1230%, and the coefficient of variation ranged from 1.12% to 71.79%. This dynamic range of results indicates that image acquisition and reconstruction conditions (i.e., in-plane pixel size, slice thickness, dose level, and reconstruction kernel) can lead to significant bias and variability in texture feature measurements. These results indicate that reconstruction and segmentation had notable effects on the bias and variability of feature measurement, underscoring the need to appropriately account for system and segmentation effects on lesion characterization.
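For reference, the two summary statistics reported above can be computed as in the sketch below, where the percent relative difference captures bias against the ground-truth feature value and the coefficient of variation captures variability across imaging conditions; the example numbers are illustrative.

    # Hedged sketch of the two summary metrics used for each texture feature:
    # percent relative difference (bias against ground truth) and coefficient
    # of variation (variability across conditions). Variable names and the
    # example values are illustrative.
    import numpy as np

    def percent_relative_difference(measured, truth):
        """Signed bias of each measurement relative to the ground-truth value."""
        return 100.0 * (measured - truth) / truth

    def coefficient_of_variation(measured):
        """Variability of the measurements, as a percentage of their mean."""
        return float(100.0 * np.std(measured, ddof=1) / np.mean(measured))

    # Example: one texture feature measured under five imaging conditions.
    truth = 12.0
    measured = np.array([11.2, 12.9, 13.4, 10.8, 12.5])
    print(percent_relative_difference(measured, truth))   # per-condition bias (%)
    print(round(coefficient_of_variation(measured), 2))   # spread across conditions (%)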
Building on the results of the TTF validation study, the virtual lesion insertion study, and the texture feature assessment study, the next three studies focused on developing and validating hybrid datasets (i.e., simulated lesions inserted into phantom and patient CT images). The fourth study was intended to determine whether interchangeability exists between real and simulated lesions in the context of patient CT images. Virtual lesions were generated based on real patient lesions extracted from the Reference Image Database to Evaluate Therapy Response (RIDER) CT dataset and were compared with their real counterparts based on lesion size. Thirty pathologically-confirmed malignancies from thoracic patient CT images were modeled. Simulated lesions were re-inserted into the original CT images using the image-domain insertion program. Four readers performed volume measurements using three commercial segmentation tools. The relative volume estimation performance of the segmentation tools was compared between real lesions in actual patient CT images and simulated lesions virtually inserted into the same patient images (i.e., hybrid datasets). Direct volume comparison showed consistent trends between real and simulated lesions across all segmentation algorithms, readers, and lesion shapes. Overall, there was a 5% volumetric difference between real and simulated lesions. The results of this study add to the realization of the potential of virtual lesions as surrogates for real clinical lesions, not just in terms of appearance but also quantitatively.
In a fifth study, a new approach was designed to evaluate the potential for hybrid datasets with a priori known lesion volume to serve as a replacement for clinical images when assessing segmentation algorithm compliance with the Quantitative Imaging Biomarkers Alliance (QIBA) Profile. This study occurred in two phases, a phantom phase and a clinical phase. The phantom phase utilized the Lungman phantom, and the clinical phase utilized the same base patient images from the RIDER dataset. In the phantom phase, hybrid datasets were generated by virtually inserting 16 simulated lesions corresponding to physical lesions into the phantom images using the projection- and image-domain techniques from the second study (Method 1 and Method 2), along with the FDA technique (Method 3). For the clinical data, only Method 2 was used to insert simulated lesions corresponding to real lesions. Across 16 participating groups, none of the virtual insertion methods was equivalent to the physical phantom within a 5% bias tolerance margin. However, the magnitude of the difference was small (across all groups, 2.4%, 5.4%, and 2% for Methods 1, 2, and 3, respectively).
The final aspect of this project aimed at developing hybrid datasets for use by the wider imaging community. These were composed of anthropomorphic lung and liver lesions, embedded in thoracic and abdominal images as a means to help assess lesion characterization directly from patient images. Each dataset was outfitted with a full complement of descriptive information for each inserted lesion including lesion size, shape, texture, and contrast.
In conclusion, this dissertation provides to the scientific community a new phantom, analysis techniques, modeling tools, and datasets that can aid in appropriately evaluating lesion characterization in modern CT systems. The new techniques proposed by this dissertation offer a more clinically relevant approach to assessing the impact of CT system and segmentation/CADx algorithms on lesion characterization.
Item Open Access Impact of CT Simulation Parameters on the Realism of Virtual Imaging Trials(2023) Montero, Isabel Seraphina
Virtual imaging trials (VITs) provide the opportunity to conduct medical imaging experiments that are otherwise not feasible with patient images. The reliability of these virtual trials depends directly on their ability to replicate clinical imaging experiments. The combined effect of various key simulation parameters on the closeness of virtual images to experimental images has not yet been explicitly quantified, which this sensitivity study aimed to address. To do so, a physical phantom, Mercury 3.0 (Sun Nuclear), was scanned using a clinical scanner (Siemens Force). Meanwhile, utilizing a validated CT simulator (DukeSim), a computational version of the Mercury 3.0 phantom was virtually imaged, emulating the same scanner model and imaging acquisition settings. The simulations were performed with varied parameters for the x-ray source, phantom model, and detector characteristics, evaluating their impact on the realism of the final reconstructed virtual images. Specifically, simulations were conducted and evaluated across a range of source and detector subsampling (1–5 per side), phantom voxel resolution (0.1–0.5 mm), anode heel effect severity (0%–40% over the anode–cathode axis), aluminum filtration (0.9–1.1 cm), and pixel-to-pixel detector crosstalk (0–10.5%, 0–15% per dimension). The real and simulated projections were then reconstructed with a vendor-specific reconstruction software (Siemens ReconCT) using identical reconstruction settings. The real and simulated images were then compared in terms of modulation transfer function (MTF), noise magnitude, noise power spectrum (NPS), and CT number accuracy. When the optimal simulation parameters were selected, the simulated images closely replicated the real images (0.80% relative error in the f50 metric measured in air). The error in the f50 measurements was highly sensitive to the variation of source and detector subsampling and phantom voxel size. The relative error in the noise magnitude measurements was not highly sensitive to the variation of source and detector subsampling or phantom voxel size but was sensitive to the modeling of anode heel effect severity. The error in the normalized NPS (nNPS) measurements was not highly sensitive to the variation of source and detector subsampling, phantom voxel size, degree of anode heel severity, aluminum filtration, or detector crosstalk. Finally, the error in the CT number accuracy measurements was not highly sensitive to the variation of source and detector subsampling, phantom voxel size, aluminum filtration, or degree of detector crosstalk, but was sensitive to the modeling of anode heel severity. Through this study, the effects of various key simulation parameters on the realism of scanner-specific simulations were assessed. Certain simulation parameters, such as source and detector subsampling and the degree of anode heel severity, exert greater influence on simulation realism than others and should therefore be prioritized when exploring novel modeling avenues.
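For reference, a minimal sketch of how the f50 metric and its relative error might be computed from sampled MTF curves (the interpolation scheme and names are assumptions; it presumes the curve starts near 1.0 and crosses 0.5 once):

    import numpy as np

    def f50_from_mtf(freq: np.ndarray, mtf: np.ndarray) -> float:
        # Frequency (cycles/mm) at which the normalized MTF first falls to 0.5,
        # found by linear interpolation between the bracketing samples.
        mtf = mtf / mtf[0]
        i = np.where(mtf <= 0.5)[0][0]
        return float(np.interp(0.5, [mtf[i], mtf[i - 1]], [freq[i], freq[i - 1]]))

    def percent_relative_error(simulated: float, measured: float) -> float:
        # Relative error (%) of the simulated metric against the physical measurement.
        return 100.0 * (simulated - measured) / measured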
Item Open Access Interatrial septum: A pictorial review of congenital and acquired pathologies and their management.(Clinical imaging, 2019-02-06) Khoshpouri, Pegah; Khoshpouri, Parisa; Bedayat, Arash; Ansari-Gilani, Kianoush; Chalian, Hamid
There are many different congenital abnormalities and acquired pathologies involving the interatrial septum. Differentiation of these pathologies significantly affects patient management. We have reviewed the various interatrial septal pathologies and discussed their congenital associations, clinical significance, and management. After reading this article, the reader should be able to better characterize interatrial septal pathologies using the optimal imaging tools, and have a better understanding of their clinical significance and management.
Item Open Access Investigation of Imaging Capabilities for Dual Cone-Beam Computed Tomography(2013) Li, Hao
A bench-top dual cone-beam computed tomography (CBCT) system was developed, consisting of two orthogonally placed 40 × 30 cm² flat-panel detectors and two conventional X-ray tubes with individual high-voltage generators, sharing the same rotational axis. For both subsystems, the X-ray source-to-detector distance is 150 cm and the X-ray source-to-rotational-axis distance is 100 cm. Objects were scanned through 200° of rotation. The dual CBCT (DCBCT) system utilized 110° of projection data from one detector and 90° from the other, while the two individual single CBCTs utilized 200° of data from each detector. The system performance was characterized in terms of uniformity, contrast, spatial resolution, noise power spectrum, and CT number linearity. The uniformity (within the axial slice and along the longitudinal direction) and noise power spectrum were assessed by scanning a water bucket; the contrast and CT number linearity were measured using the Catphan phantom; and the spatial resolution was evaluated using a tungsten wire phantom. A skull phantom and a ham were also scanned to provide qualitative evaluation of high- and low-contrast resolution. Each measurement was compared between the dual and single CBCT systems.
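A minimal sketch of the kind of noise power spectrum estimate used for such characterization, assuming an ensemble of uniform square ROIs taken from the water-bucket images (names and the simple mean-subtraction detrend are assumptions):

    import numpy as np

    def nps_2d(rois: np.ndarray, pixel_mm: float) -> np.ndarray:
        # 2-D noise power spectrum (HU^2 * mm^2) from a stack of uniform ROIs with
        # shape (n_rois, N, N); each ROI is detrended by subtracting its own mean.
        n_rois, N, _ = rois.shape
        spectra = []
        for roi in rois.astype(float):
            noise = roi - roi.mean()
            dft = np.fft.fftshift(np.fft.fft2(noise))
            spectra.append(np.abs(dft) ** 2)
        return (pixel_mm ** 2 / (N * N)) * np.mean(spectra, axis=0)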
Compared with single CBCT, the DCBCT presented: 1) a decrease in uniformity of 1.9% in the axial view and 1.1% in the longitudinal view, averaged over four energies (80, 100, 125 and 150 kVp); 2) comparable or slightly better contrast-to-noise ratio (CNR) for low-contrast objects and comparable contrast for high-contrast objects; 3) comparable spatial resolution; 4) comparable CT number linearity with R2 ≥ 0.99 for all four tested energies; 5) a noise power spectrum of lower magnitude. DCBCT images of the skull phantom and the ham demonstrated both high-contrast resolution and good soft-tissue contrast.
One of the major challenges for clinical implementation of four-dimensional (4D) CBCT is the long scan time. To investigate the 4D imaging capabilities of the DCBCT system, motion phantom studies were conducted to validate the scan-time efficiency gain by comparing 4D images generated from 4D-DCBCT and 4D-CBCT. First, a simple sinusoidal motion profile was used to confirm the scan time reduction. Next, both irregular sinusoidal and patient-derived profiles were used to investigate the advantage of temporally correlated orthogonal projections afforded by the reduced scan time. Normalized mutual information (NMI) between 4D-DCBCT and 4D-CBCT images was used for quantitative evaluation.
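For context, one common way to compute NMI from a joint intensity histogram is sketched below; the dissertation's exact normalization may differ (the values reported in the next paragraph suggest a different scaling), so treat this as an illustrative assumption.

    import numpy as np

    def normalized_mutual_information(img_a: np.ndarray, img_b: np.ndarray, bins: int = 64) -> float:
        # NMI = (H(A) + H(B)) / H(A, B), estimated from a joint intensity histogram.
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        h_a = -np.sum(px[px > 0] * np.log2(px[px > 0]))
        h_b = -np.sum(py[py > 0] * np.log2(py[py > 0]))
        h_ab = -np.sum(pxy[pxy > 0] * np.log2(pxy[pxy > 0]))
        return float((h_a + h_b) / h_ab)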
For the simple sinusoidal profile, the average NMI for ten phases between two single 4D-CBCTs was 0.336, indicating the maximum NMI that can be achieved for this study. The average NMIs between 4D-DCBCT and each single 4D-CBCT were 0.331 and 0.320. For both irregular sinusoidal and patient-derived profiles, 4D-DCBCT generated phase images with less motion blurring when compared with single 4D-CBCT.
For dual-energy kV imaging, 80 kVp and 150 kVp projections were acquired, with an additional 0.8 mm tin filtration. The virtual monochromatic (VM) technique was implemented by first decomposing these projections into acrylic and aluminum basis-material projections, synthesizing VM projections, and then reconstructing VM CBCTs. The effect of VM CBCT on metal artifact reduction was evaluated with an in-house titanium-BB phantom. The optimal VM energy to maximize CNR for iodine contrast and minimize beam hardening in VM CBCT was determined using a water phantom containing two iodine concentrations. The linearly mixed (LM) technique was implemented by linearly combining the low-energy (80 kVp) and high-energy (150 kVp) CBCTs. The dose partitioning between low- and high-energy CBCTs was varied (20%, 40%, 60%, and 80% for low energy) while keeping the total dose, measured using an ion chamber, approximately equal to that of the single-energy CBCTs. Noise levels and CNRs for four tissue types were investigated for dual-energy LM CBCTs in comparison with single-energy CBCTs at 80, 100, 125, and 150 kVp.
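Conceptually, once the basis-material thickness maps are known, a VM projection is a weighted sum of the two maps, and an LM image is a weighted sum of the two reconstructions. The sketch below illustrates both steps with hypothetical names and inputs; it is not the dissertation's implementation.

    import numpy as np

    def virtual_mono_projection(t_acrylic_cm: np.ndarray, t_aluminum_cm: np.ndarray,
                                mu_acrylic: float, mu_aluminum: float) -> np.ndarray:
        # Synthesize a virtual monochromatic line-integral projection from decomposed
        # basis-material thicknesses (cm) and linear attenuation coefficients (1/cm)
        # evaluated at the chosen monochromatic energy.
        return mu_acrylic * t_acrylic_cm + mu_aluminum * t_aluminum_cm

    def linearly_mixed_image(low_kvp_img: np.ndarray, high_kvp_img: np.ndarray,
                             weight_low: float = 0.5) -> np.ndarray:
        # Linearly mixed CBCT from co-registered low- and high-energy reconstructions;
        # weight_low reflects the dose partitioning assigned to the low-energy scan.
        return weight_low * low_kvp_img + (1.0 - weight_low) * high_kvp_img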
The VM technique showed a substantial reduction of metal artifacts at 100 keV, with a 40% reduction in background standard deviation compared with a 125 kVp single-energy scan of equal dose. The VM energies that maximized CNR for both iodine concentrations and minimized beam hardening in the metal-free object were 50 keV and 60 keV, respectively. The difference in average noise levels measured in the phantom background was 1.2% between dual-energy LM CBCTs and equivalent-dose single-energy CBCTs. CNR values in the LM CBCTs at any dose partitioning were better than those of 150 kVp single-energy CBCTs. The average CNRs for four tissue types with an 80% dose fraction at low energy showed 9.0% and 4.1% improvement relative to 100 kVp and 125 kVp single-energy CBCTs, respectively. CNRs for low-contrast objects improved as the dose partitioning was more heavily weighted toward low energy (80 kVp).
For application of the dual-energy technique in the kilovoltage (kV) and megavoltage (MV) range, both MV projections (gantry angles 0° to 100°) and kV projections (90° to 200°) were acquired with the orthogonal kV/MV imaging hardware available on modern linear accelerators, as the gantry rotated through a total of 110°. A selected range of overlapping projections between 90° and 100° were then decomposed into two basis-material projections using parameters determined experimentally from orthogonally stacked aluminum and acrylic step wedges. Given the attenuation coefficients of aluminum and acrylic at a predetermined energy, one set of VM projections could be synthesized from the two corresponding sets of decomposed projections. Two linear functions were generated using projection information at the overlap angles to convert kV and MV projections at non-overlap angles into approximate VM projections for CBCT reconstruction. CNRs were calculated for different inserts in VM CBCTs of a CatPhan phantom at various selected energies and compared with those in kV and MV CBCTs. The effect of the number of overlap projections on CNR was evaluated. Additionally, the effect of beam orientation was studied by scanning the CatPhan sandwiched between two 5 cm solid-water phantoms on both lateral sides, as well as an electron density phantom with two metal bolt inserts.
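A minimal sketch of the conversion of non-overlap projections, assuming the linear functions are obtained by least-squares fitting of native projection values against the VM projections synthesized at the overlap angles (function names are hypothetical):

    import numpy as np

    def fit_projection_mapping(native_overlap: np.ndarray, vm_overlap: np.ndarray):
        # Fit P_VM ~ a * P_native + b using pixel values from the overlap-angle projections.
        a, b = np.polyfit(native_overlap.ravel(), vm_overlap.ravel(), deg=1)
        return a, b

    def to_approximate_vm(native_projection: np.ndarray, a: float, b: float) -> np.ndarray:
        # Convert a kV or MV projection at a non-overlap angle to an approximate VM projection.
        return a * native_projection + b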
Proper selection of the VM energy (30 keV and 40 keV for low-density polyethylene (LDPE) and polymethylpentene (PMP), and 2 MeV for Delrin) provided comparable or even better CNR results compared with kV or MV CBCT. Increasing the number of overlapping kV and MV projections demonstrated only marginal improvement in CNR for the different inserts (with the exception of LDPE), and therefore a single overlap projection was found to be sufficient for the CatPhan study. It was also evident that optimal CBCT image quality was achieved when the MV beam penetrated the object along its most heavily attenuating direction.
In conclusion, the performance of a bench-top DCBCT imaging system has been characterized and is comparable to that of a single CBCT. The 4D-DCBCT provides an efficient 4D imaging technique for motion management, reducing the scan time by approximately a factor of two. The temporally correlated orthogonal projections reduced image blur across the 4D phase images. Dual-energy CBCT imaging techniques were implemented to synthesize VM and LM CBCTs. VM CBCT was effective at reducing metal artifacts. Depending on the dose-partitioning scheme, LM CBCT demonstrated the potential to improve CNR for low-contrast objects compared with single-energy CBCT acquired at equivalent dose. A novel technique was developed to generate VM CBCTs from kV/MV projections; this technique has the potential to improve CNR at selected VM energies and to suppress artifacts with appropriate beam orientations.
Item Open Access Minimum Detectability and Dose Analysis for Size-based Optimization of CT Protocols(2014) Smitherman, Christopher Craig
Purpose: To develop a comprehensive model of task-based performance of CT across a broad library of CT protocols, so that radiation dose and image quality can be optimized within a large multi-vendor clinical facility.
Methods: 80 adult CT protocols from the Duke University Medical Center were grouped into 23 protocol groups with similar acquisition characteristics. A size-based image quality phantom (Duke Mercury Phantom 2.0) was imaged using these protocol groups for a range of clinically relevant dose levels on two CT manufacturer platforms (Siemens SOMATOM Definition Flash and GE CT750 HD). For each protocol group, phantom size, and dose level, the images were analyzed to extract task-based image quality metrics: the task transfer function (TTF) and the noise power spectrum (NPS). The TTF and NPS were further combined with generalized models of lesion task functions to predict the detectability of the lesions in terms of the area under the receiver operating characteristic curve (Az). A graphical user interface (GUI) was developed to present Az as a function of lesion size and contrast, dose, patient size, and protocol, as well as to derive the dose necessary to achieve a detection threshold for a targeted lesion.
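As a sketch of how detectability can be predicted from these quantities, the code below computes a non-prewhitening observer detectability index from a task function, TTF, and NPS on a common frequency grid, and converts it to Az under an equal-variance binormal model; the observer model and names are assumptions rather than the exact formulation used in the dissertation.

    import numpy as np
    from scipy.stats import norm

    def detectability_npw(task_w: np.ndarray, ttf: np.ndarray, nps: np.ndarray, df: float) -> float:
        # Non-prewhitening d' from 2-D task function W(u,v), TTF(u,v), and NPS(u,v)
        # sampled on a common frequency grid with spacing df (cycles/mm).
        signal = (np.abs(task_w) ** 2) * (ttf ** 2)
        numerator = (signal.sum() * df * df) ** 2
        denominator = (signal * nps).sum() * df * df
        return float(np.sqrt(numerator / denominator))

    def area_under_roc(d_prime: float) -> float:
        # Az = Phi(d'/sqrt(2)) for an equal-variance binormal model.
        return float(norm.cdf(d_prime / np.sqrt(2.0)))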
Results: The GUI provided predictions of Az values modeling detection confidence for a targeted lesion, patient size, and dose. As an example, for an abdomen-pelvis exam on the GE scanner with a task size/contrast of 5 mm/50 HU, an Az of 0.9 indicated a dose requirement of 4.0, 8.9, and 16.9 mGy for patient diameters of 25, 30, and 35 cm, respectively. For a constant patient diameter of 30 cm and 50-HU lesion contrast, the minimum detectable lesion size at those dose levels was predicted to be 8.4, 5.0, and 3.9 mm, respectively.
Conclusions: A CT protocol optimization platform was developed by combining task-based detectability calculations with a GUI that demonstrates the tradeoff between dose and image quality. The platform can be used to improve individual protocol dose efficiency, as well as to improve protocol consistency across various patient diameters and CT scanners. The GUI can further be used to calculate personalized dose for individualized examination tasks.
Item Open Access Predicting Task-specific Performance for Iterative Reconstruction in Computed Tomography(2014) Chen, Baiyu
The cross-sectional images of computed tomography (CT) are calculated from a series of projections using reconstruction methods. Recently introduced on clinical CT scanners, iterative reconstruction (IR) methods enable potential patient dose reduction through significantly reduced image noise, but are limited by their "waxy" texture and nonlinear nature. To balance the advantages and disadvantages of IR, evaluations are needed with diagnostic accuracy as the endpoint. Moreover, such evaluations need to take into consideration the type of imaging task (detection or quantification), the properties of the task (lesion size, contrast, edge profile, etc.), and other acquisition and reconstruction parameters.
To evaluate detection tasks, the most widely accepted method is the observer study, which involves image preparation, graphical user interface setup, manual detection and scoring, and statistical analyses. Because such evaluations can be time consuming, mathematical models have been proposed to efficiently predict observer performance in terms of a detectability index (d'). However, certain assumptions, such as system linearity, may need to be made, thus limiting the application of these models to potentially nonlinear IR. For evaluating quantification tasks, the conventional method can also be time consuming, as it usually involves experiments with anthropomorphic phantoms. A mathematical model analogous to d', named the estimability index (e'), was therefore proposed for the prediction of volume quantification performance. However, this prior model was limited in its modeling of the task, its modeling of the volume segmentation process, and its assumption of system linearity.
To expand prior d' and e' models to the evaluations of IR performance, the first part of this dissertation developed an experimental methodology to characterize image noise and resolution in a manner that was relevant to nonlinear IR. Results showed that this method was efficient and meaningful in characterizing the system performance accounting for the non-linearity of IR at multiple contrast and noise levels. It was also shown that when certain criteria were met, the measurement error could be controlled to be less than 10% to allow challenging measuring conditions with low object contrast and high image noise.
The second part of this dissertation incorporated the noise and resolution characterizations developed in the first part into the d' calculations, and evaluated the performance of IR and conventional filtered backprojection (FBP) for detection tasks. Results showed that compared to FBP, IR required less dose to achieve a threshold performance accuracy level, therefore potentially reducing the required dose. The dose saving potential of IR was not constant, but dependent on the task properties, with subtle tasks (small size and low contrast) enabling more dose saving than conspicuous tasks. Results also showed that at a fixed dose level, IR allowed more subtle tasks to exceed a threshold performance level, demonstrating the overall superior performance of IR for detection tasks.
The third part of this dissertation evaluated IR performance in volume quantification tasks with the conventional experimental method. The volume quantification performance of IR was measured using an anthropomorphic chest phantom and compared to FBP in terms of accuracy and precision. Results showed that across a wide range of dose and slice thickness, IR led to accuracy significantly different from that of FBP, highlighting the importance of calibrating or expanding current segmentation software to incorporate the image characteristics of IR. Results also showed that despite IR's substantial noise reduction in uniform regions, IR in general had quantification precision similar to that of FBP, possibly due to IR's diminished noise reduction at edges (such as nodule boundaries) and its loss of resolution at low dose levels.
The last part of this dissertation mathematically predicted IR performance in volume quantification tasks with an e' model that was extended in three respects, including the task modeling, the segmentation software modeling, and the characterizations of noise and resolution properties. Results showed that the extended e' model correlated with experimental precision across a range of image acquisition protocols, nodule sizes, and segmentation software. In addition, compared to experimental assessments of quantification performance, e' was significantly reduced in computational time, such that it can be easily employed in clinical studies to verify quantitative compliance and to optimize clinical protocols for CT volumetry.
The research in this dissertation has two important clinical implications. First, because d' values reflect the percent of detection accuracy and e' values reflect the quantification precision, this work provides a framework for evaluating IR with diagnostic accuracy as the endpoint. Second, because the calculations of d' and e' models are much more efficient compared to conventional observer studies, the clinical protocols with IR can be optimized in a timely fashion, and the compliance of clinical performance can be examined routinely.
Item Open Access Prospective Estimation of Radiation Dose and Image Quality for Optimized CT Performance(2016) Tian, Xiaoyu
X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase, mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].
Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.
As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus imperative to optimize the radiation dose used in CT examinations. The key to dose optimization is determining the minimum amount of radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would significantly benefit from effective metrics to characterize the radiation dose and image quality of a CT exam. Moreover, if accurate predictions of radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models to prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to implement the theoretical models into clinical practice by developing an organ-based dose monitoring system and an image-based noise addition software for protocol optimization.
More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current condition. The study effectively modeled the anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distribution. The dependence of organ dose coefficients on patient size and scanner models was further evaluated. Distinct from prior work, these studies use the largest number of patient models to date with representative age, weight percentile, and body mass index (BMI) range.
With effective quantification of organ dose under the constant tube current condition, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model was validated by comparing the predicted organ dose with the dose estimated from Monte Carlo simulations in which the TCM function was explicitly modeled.
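At a very high level, the prediction can be thought of as convolving the z-axis tube-current profile with a dose-spread kernel to approximate the radiation field seen by each organ, then scaling by an organ dose coefficient. The sketch below is only a conceptual illustration under that assumption; the kernel, units, and names are hypothetical and do not reproduce the dissertation's validated method.

    import numpy as np

    def organ_dose_tcm_sketch(tcm_ma: np.ndarray, dose_spread_kernel: np.ndarray,
                              organ_slices: slice, rotation_time_s: float,
                              organ_coeff_mgy_per_mas: float) -> float:
        # Approximate the z-axis radiation field by convolving the per-slice tube current
        # (mA) with a normalized dose-spread kernel, average the effective mAs over the
        # organ's slice range, and scale by an organ-specific dose coefficient (mGy/mAs).
        field_ma = np.convolve(tcm_ma, dose_spread_kernel / dose_spread_kernel.sum(), mode="same")
        organ_mas = field_ma[organ_slices].mean() * rotation_time_s
        return organ_coeff_mgy_per_mas * organ_mas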
Chapter 5 aims to implement the organ dose-estimation framework in clinical practice to develop an organ dose-monitoring program based on a commercial software (Dose Watch, GE Healthcare, Waukesha, WI). In the first phase of the study we focused on body CT examinations, and so the patient’s major body landmark information was extracted from the patient scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.
With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. It outlines the method that was developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.
Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
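A minimal sketch of the image-based noise addition idea in step (1), assuming quantum noise variance scales inversely with dose and, for simplicity, adding white rather than NPS-shaped (correlated) noise; the function name and inputs are illustrative assumptions.

    import numpy as np

    def simulate_reduced_dose(image_hu: np.ndarray, noise_sigma_hu: np.ndarray,
                              dose_fraction: float, seed: int = 0) -> np.ndarray:
        # Quantum noise variance scales as 1/dose, so emulating a scan at dose_fraction
        # of the original dose requires adding zero-mean noise with standard deviation
        # sigma * sqrt(1/dose_fraction - 1), where sigma is the original local noise level.
        rng = np.random.default_rng(seed)
        scale = np.sqrt(1.0 / dose_fraction - 1.0)
        return image_hu + scale * noise_sigma_hu * rng.standard_normal(image_hu.shape)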
Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.
Item Open Access QIBA guidance: Computed tomography imaging for COVID-19 quantitative imaging applications.(Clinical imaging, 2021-02-25) Avila, Ricardo S; Fain, Sean B; Hatt, Chuck; Armato, Samuel G; Mulshine, James L; Gierada, David; Silva, Mario; Lynch, David A; Hoffman, Eric A; Ranallo, Frank N; Mayo, John R; Yankelevitz, David; Estepar, Raul San Jose; Subramaniam, Raja; Henschke, Claudia I; Guimaraes, Alex; Sullivan, Daniel C
As the COVID-19 pandemic impacts global populations, computed tomography (CT) lung imaging is being used in many countries to help manage patient care as well as to rapidly identify potentially useful quantitative COVID-19 CT imaging biomarkers. Quantitative COVID-19 CT imaging applications, typically based on computer vision modeling and artificial intelligence algorithms, include the potential for better methods to assess COVID-19 extent and severity, assist with differential diagnosis of COVID-19 versus other respiratory conditions, and predict disease trajectory. To help accelerate the development of robust quantitative imaging algorithms and tools, it is critical that CT imaging is obtained following the best practices of the quantitative lung CT imaging community. Toward this end, the Radiological Society of North America's (RSNA) Quantitative Imaging Biomarkers Alliance (QIBA) CT Lung Density Profile Committee and CT Small Lung Nodule Profile Committee developed a set of best practices to guide clinical sites using quantitative imaging solutions and to accelerate the international development of quantitative CT algorithms for COVID-19. This guidance document provides quantitative CT lung imaging recommendations for COVID-19 CT imaging, including recommended CT image acquisition settings for contemporary CT scanners. Additional best practice guidance is provided on scientific publication reporting of quantitative CT imaging methods and on the importance of contributing COVID-19 CT imaging datasets to open science research databases.
Item Open Access Radiation Dose to the Lens of the Eye from Computed Tomography Scans of the Head(2016) Januzis, Natalie Ann
While it is well known that exposure to radiation can result in cataract formation, questions still remain about the presence of a dose threshold in radiation cataractogenesis. Since the exposure history from diagnostic CT exams is well documented in a patient's medical record, the population of patients chronically exposed to radiation from head CT exams may be an interesting population to explore for further research in this area. However, there are some challenges in estimating lens dose from head CT exams. An accurate lens dosimetry model would have to account for differences in imaging protocols, differences in head size, and the use of any dose reduction methods.
The overall objective of this dissertation was to develop a comprehensive method to estimate radiation dose to the lens of the eye for patients receiving CT scans of the head. This research is comprised of a physics component, in which a lens dosimetry model was derived for head CT, and a clinical component, which involved the application of that dosimetry model to patient data.
The physics component includes experiments related to the physical measurement of the radiation dose to the lens by various types of dosimeters placed within anthropomorphic phantoms. These dosimeters include high-sensitivity MOSFETs, TLDs, and radiochromic film. The six anthropomorphic phantoms used in these experiments range in age from newborn to adult.
First, the lens dose from five clinically relevant head CT protocols was measured in the anthropomorphic phantoms with MOSFET dosimeters on two state-of-the-art CT scanners. The volume CT dose index (CTDIvol), which is a standard CT output index, was compared to the measured lens doses. Phantom age-specific CTDIvol-to-lens dose conversion factors were derived using linear regression analysis. Since head size can vary among individuals of the same age, a method was derived to estimate the CTDIvol-to-lens dose conversion factor using the effective head diameter. These conversion factors were derived for each scanner individually, but also were derived with the combined data from the two scanners as a means to investigate the feasibility of a scanner-independent method. Using the scanner-independent method to derive the CTDIvol-to-lens dose conversion factor from the effective head diameter, most of the fitted lens dose values fell within 10-15% of the measured values from the phantom study, suggesting that this is a fairly accurate method of estimating lens dose from the CTDIvol with knowledge of the patient’s head size.
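A minimal sketch of the size-specific estimation step, assuming (purely for illustration) a linear dependence of the conversion factor on effective head diameter; the fit form, names, and units are assumptions rather than the dissertation's derived formulas.

    import numpy as np

    def fit_conversion_vs_diameter(effective_diam_cm: np.ndarray, conversion_factor: np.ndarray):
        # Fit the CTDIvol-to-lens-dose conversion factor as a linear function of
        # effective head diameter (a simple assumed model).
        slope, intercept = np.polyfit(effective_diam_cm, conversion_factor, deg=1)
        return slope, intercept

    def estimate_lens_dose(ctdi_vol_mgy: float, diameter_cm: float,
                           slope: float, intercept: float) -> float:
        # Lens dose (mGy) = size-dependent conversion factor x reported CTDIvol (mGy).
        return (slope * diameter_cm + intercept) * ctdi_vol_mgy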
Second, the dose reduction potential of organ-based tube current modulation (OB-TCM) and its effect on the CTDIvol-to-lens dose estimation method was investigated. The lens dose was measured with MOSFET dosimeters placed within the same six anthropomorphic phantoms. The phantoms were scanned with the five clinical head CT protocols with OB-TCM enabled on the one scanner model at our institution equipped with this software. The average decrease in lens dose with OB-TCM ranged from 13.5 to 26.0%. Using the size-specific method to derive the CTDIvol-to-lens dose conversion factor from the effective head diameter for protocols with OB-TCM, the majority of the fitted lens dose values fell within 15-18% of the measured values from the phantom study.
Third, the effect of gantry angulation on lens dose was investigated by measuring the lens dose with TLDs placed within the six anthropomorphic phantoms. The 2-dimensional spatial distribution of dose within the areas of the phantoms containing the orbit was measured with radiochromic film. A method was derived to determine the CTDIvol-to-lens dose conversion factor based upon distance from the primary beam scan range to the lens. The average dose to the lens region decreased substantially for almost all the phantoms (ranging from 67 to 92%) when the orbit was exposed to scattered radiation compared to the primary beam. The effectiveness of this method to reduce lens dose is highly dependent upon the shape and size of the head, which influences whether or not the angled scan range coverage can include the entire brain volume and still avoid the orbit.
The clinical component of this dissertation involved performing retrospective patient studies in the pediatric and adult populations, and reconstructing the lens doses from head CT examinations with the methods derived in the physics component. The cumulative lens doses in the patients selected for the retrospective study ranged from 40 to 1020 mGy in the pediatric group, and 53 to 2900 mGy in the adult group.
This dissertation represents a comprehensive approach to dosimetry of the lens of the eye in CT imaging of the head. The collected data and derived formulas can be used in future studies on radiation-induced cataracts from repeated CT imaging of the head. Additionally, they can be used in the areas of personalized patient dose management, protocol optimization, and clinician training.
Item Open Access SEGREGATION OF SIMULATED RFID MARKERS DURING HANDLING AND TRANSPORT OF WHEAT(TRANSACTIONS OF THE ASABE, 2014-01-01) Steinmeier, U; Neudecker, M; Witt, A; von Hoersten, D; Schroeter, M
Item Open Access Three-dimensional computer generated breast phantom based on empirical data(MEDICAL IMAGING 2008: PHYSICS OF MEDICAL IMAGING, PTS 1-3, 2008) Li, CM; Segars, WP; Lo, JY; Veress, AI; Boone, JM; III, DJT