Browsing by Subject "Image quality"
Item Open Access: Advanced Techniques for Image Quality Assessment of Modern X-ray Computed Tomography Systems (2016), Solomon, Justin Bennion
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection, FBP, vs. Advanced Modeled Iterative Reconstruction, ADMIRE). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, at large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (an increase in detectability index of up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
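As an orienting sketch of the kind of Fourier-domain calculation behind such detectability comparisons (this is not the dissertation's implementation; the Gaussian task function, flat TTF, and white NPS below are hypothetical stand-ins), a non-prewhitening matched-filter detectability index can be computed as:

```python
import numpy as np

def npw_detectability(task_fn, ttf, nps, df):
    """Non-prewhitening matched-filter detectability index d'.

    task_fn : 2-D task function |W(u,v)| (e.g., Fourier transform of a lesion)
    ttf     : 2-D task transfer function of the imaging system
    nps     : 2-D noise power spectrum
    df      : frequency bin area (du * dv)
    """
    signal = np.sum((task_fn * ttf) ** 2) * df
    noise = np.sum((task_fn * ttf) ** 2 * nps) * df
    return signal / np.sqrt(noise)

# Toy example on a small frequency grid (all quantities hypothetical)
u = np.fft.fftfreq(64, d=0.5)              # cycles/mm for 0.5 mm pixels
U, V = np.meshgrid(u, u)
W = np.exp(-(U**2 + V**2) / 0.1)           # Gaussian stand-in task function
ttf = np.ones_like(W)                      # idealized flat TTF
nps = np.full_like(W, 10.0)                # white NPS
dprime = npw_detectability(W, ttf, nps, df=(u[1] - u[0]) ** 2)
```

With a white NPS, quadrupling the noise power halves the resulting index, which is the kind of dose/noise trade-off such metrics quantify.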
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (≤6 mm) low-contrast (≤20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would yield image quality metrics that best correlated with human detection performance. The models included simple metrics of image quality, such as the contrast-to-noise ratio (CNR), and more sophisticated observer models, such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that the non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found not to correlate strongly with human performance, especially when comparing different reconstruction algorithms.
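For reference, the CNR metric that correlated poorly with human performance is simple to compute; the ROI values below are simulated stand-ins for lesion and background regions, not data from the study:

```python
import numpy as np

def cnr(lesion_roi, background_roi):
    """Contrast-to-noise ratio: absolute mean difference over background noise."""
    contrast = abs(lesion_roi.mean() - background_roi.mean())
    noise = background_roi.std(ddof=1)
    return contrast / noise

# Hypothetical ROIs: 50 HU background, -15 HU lesion contrast, 10 HU noise
rng = np.random.default_rng(0)
background = rng.normal(50.0, 10.0, size=(64, 64))
lesion = rng.normal(35.0, 10.0, size=(16, 16))
value = cnr(lesion, background)  # roughly 15 / 10
```

Because CNR reduces image quality to two means and a standard deviation, it carries no information about noise texture or resolution, which is one reason it can mislead when comparing reconstruction algorithms.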
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that for FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
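The repeated-scan subtraction idea for isolating quantum noise can be sketched as follows (simulated data; subtracting two scans of the same static phantom cancels the deterministic background, textured or not, leaving only noise):

```python
import numpy as np

def quantum_noise_std(scan_a, scan_b):
    """Estimate quantum noise via repeated-scan subtraction.

    The difference of two repeated scans cancels the static anatomy/texture;
    its variance is the sum of the two scans' noise variances, so dividing
    the difference std by sqrt(2) recovers the per-image noise level.
    """
    diff = scan_a.astype(float) - scan_b.astype(float)
    return diff.std(ddof=1) / np.sqrt(2.0)

# Simulated example: fixed texture plus independent 12 HU noise per scan
rng = np.random.default_rng(1)
texture = rng.normal(0.0, 30.0, size=(128, 128))
scan1 = texture + rng.normal(0.0, 12.0, size=texture.shape)
scan2 = texture + rng.normal(0.0, 12.0, size=texture.shape)
sigma = quantum_noise_std(scan1, scan2)  # recovers ~12 despite the texture
```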
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft-tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing the image quality of iterative algorithms.
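For orientation, a conventional ensemble NPS estimate from square noise-only ROIs can be sketched as below; note that the dissertation's novel contribution was handling irregularly shaped ROIs, which this simple rectangular-ROI version does not address:

```python
import numpy as np

def nps_2d(rois, pixel_mm):
    """Ensemble noise power spectrum from stacked square noise-only ROIs.

    rois : array of shape (n_rois, ny, nx), one noise realization per ROI
    pixel_mm : pixel size in mm
    Returns the 2-D NPS in units of value^2 * mm^2.
    """
    rois = np.asarray(rois, dtype=float)
    n, ny, nx = rois.shape
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)  # detrend each ROI
    spectra = np.abs(np.fft.fft2(rois)) ** 2             # FFT over last two axes
    return spectra.mean(axis=0) * pixel_mm**2 / (nx * ny)

# Simulated white-noise ROIs with sigma = 10 and 0.5 mm pixels
rng = np.random.default_rng(2)
rois = rng.normal(0.0, 10.0, size=(200, 32, 32))
nps = nps_2d(rois, pixel_mm=0.5)
# Parseval-style check: integrating the NPS over frequency recovers the variance
var_est = nps.sum() / (32 * 0.5) ** 2  # ~100
```

Integrating the estimated NPS back to the noise variance, as in the last line, is a common sanity test for such estimators.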
To move beyond assessing only noise properties in textured phantoms toward assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in the textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Item Open Access: Automated Assessment of Image Quality and Dose Attributes in Clinical CT Images (2016), Sanders, Jeremiah Wayne
Computed tomography (CT) is a valuable technology to the healthcare enterprise, as evidenced by the more than 70 million CT exams performed every year. As a result, CT has become the largest contributor to population doses amongst all medical imaging modalities that utilize man-made ionizing radiation. Acknowledging the fact that ionizing radiation poses a health risk, there exists the need to strike a balance between diagnostic benefit and radiation dose. Thus, to ensure that CT scanners are optimally used in the clinic, an understanding and characterization of image quality and radiation dose are essential.
The state of the art in both image quality characterization and radiation dose estimation in CT depends on phantom-based measurements reflective of systems and protocols. For image quality characterization, measurements are performed on inserts embedded in static phantoms and the results are ascribed to clinical CT images. However, the key objective for image quality assessment should be its quantification in clinical images; that is the only characterization of image quality that clinically matters, as it is most directly related to the actual quality of clinical images. Moreover, for dose estimation, phantom-based dose metrics, such as the CT dose index (CTDI) and size-specific dose estimates (SSDE), are measured by the scanner and referenced as indicators of radiation exposure. However, CTDI and SSDE are surrogates for dose, rather than dose per se.
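As a concrete illustration of how SSDE is derived from CTDIvol, the sketch below uses the exponential size-conversion fit for the 32-cm reference phantom; the coefficients are quoted from AAPM Report 204 and should be verified against the report before any real use:

```python
import math

def ssde(ctdi_vol, water_eq_diameter_cm):
    """Size-specific dose estimate (mGy) from CTDIvol (32-cm phantom reference).

    Applies the exponential size-conversion fit f = a * exp(-b * Dw);
    coefficients quoted from AAPM Report 204 (assumed, verify before use).
    """
    a, b = 3.704369, 0.03671937
    return ctdi_vol * a * math.exp(-b * water_eq_diameter_cm)

# Example: 10 mGy CTDIvol for a 25 cm water-equivalent-diameter patient
dose = ssde(ctdi_vol=10.0, water_eq_diameter_cm=25.0)  # ~14.8 mGy
```

The point of the conversion is exactly the surrogate problem described above: the same CTDIvol corresponds to a higher absorbed dose in a smaller patient, which the size-dependent factor accounts for.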
Currently, several software packages track the CTDI and SSDE associated with individual CT examinations. This is primarily the result of two factors. The first is regulatory and governmental pressure on clinics and hospitals to monitor the radiation exposure of individuals in our society. The second is the personal concern of patients who are curious about the health risks associated with the ionizing radiation exposure they receive as a result of their diagnostic procedures.
An idea that resonates with clinical imaging physicists is that patients come to the clinic to have quality images acquired so that they can receive a proper diagnosis, not to be exposed to ionizing radiation. Thus, while it is important to monitor the dose to patients undergoing CT examinations, it is equally, if not more, important to monitor the image quality of the clinical images generated by the CT scanners throughout the hospital.
The purposes of the work presented in this thesis are threefold: (1) to develop and validate a fully automated technique to measure spatial resolution in clinical CT images, (2) to develop and validate a fully automated technique to measure image contrast in clinical CT images, and (3) to develop a fully automated technique to estimate radiation dose (not surrogates for dose) from a variety of clinical CT protocols.
Item Open Access: Backscatter Spatial Coherence for Ultrasonic Image Quality Characterization: Theory and Applications (2020), Long, Willie Jie
Adaptive ultrasound systems, designed to automatically and dynamically tune imaging parameters based on image quality feedback, represent a promising solution for reducing the user-dependence of ultrasound. The efficacy of such systems, however, depends on the ability to accurately and reliably measure in vivo image quality with minimal user interaction -- a task for which existing image quality metrics are ill-suited. This dissertation explores the application of backscatter spatial coherence as an alternative image quality metric for adaptive imaging. Adaptive ultrasound methods applying spatial coherence feedback are evaluated in the context of three different applications: 1) the automated selection of acoustic output, 2) model-based clutter suppression in B-mode imaging, and 3) adaptive wall filtering in color flow imaging.
A novel image quality metric, known as the lag-one coherence (LOC), was introduced along with the theory that relates LOC to channel noise and the conventional image quality metrics of contrast and contrast-to-noise ratio (CNR). Simulation studies were performed to validate this theory and compare the variability of LOC to that of conventional metrics. In addition, matched measurements of LOC, contrast, CNR, and temporal correlation were obtained from harmonic phantom and liver images formed with varying mechanical index (MI) to assess the feasibility of adaptive acoustic output selection using LOC feedback. Measurements of LOC in simulation and phantom demonstrated lower variability in LOC relative to contrast and CNR over a wide range of clinically-relevant noise levels. This improved stability was supported by in vivo measurements of LOC that showed increased monotonicity with changes in MI compared to matched measurements of contrast and CNR (88.6% and 85.7% of acquisitions, respectively). The sensitivity of LOC to temporally-stable acoustic noise was evidenced by positive correlations between LOC and contrast (r=0.74) and between LOC and CNR (r=0.66) at high acoustic output levels in the absence of thermal noise. Together, these properties translated to repeatable characterization of patient-specific trends in image quality, demonstrating the feasibility of automated acoustic output selection using LOC and its application for in vivo image quality feedback.
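A minimal sketch of the LOC computation on delayed channel data might look like the following (simulated data; real implementations operate on focused RF signals within an axial kernel, which this toy version omits):

```python
import numpy as np

def lag_one_coherence(channel_data):
    """Lag-one coherence: mean normalized correlation of adjacent channels.

    channel_data : array of shape (n_channels, n_samples) of delayed RF data.
    Returns a value near 1 for coherent echoes, lower with channel noise.
    """
    x = channel_data - channel_data.mean(axis=1, keepdims=True)
    num = np.sum(x[:-1] * x[1:], axis=1)
    den = np.sqrt(np.sum(x[:-1] ** 2, axis=1) * np.sum(x[1:] ** 2, axis=1))
    return float(np.mean(num / den))

# Simulated 64-channel aperture: identical echo on every channel (fully coherent)
rng = np.random.default_rng(3)
signal = rng.normal(size=(1, 256)).repeat(64, axis=0)
noise = rng.normal(size=(64, 256))
loc_clean = lag_one_coherence(signal)               # ~1.0
loc_noisy = lag_one_coherence(signal + 2.0 * noise)  # degraded by channel noise
```

This is the basic mechanism by which LOC serves as a noise-sensitive quality metric: incoherent channel noise pulls the adjacent-channel correlation below unity in a predictable way.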
In a second study, a novel model-based adaptive imaging method called Lag-one Spatial Coherence Adaptive Normalization, or LoSCAN, was explored as a means to locally estimate and compensate for the contribution of spatially incoherent clutter from conventional delay-and-sum (DAS) images using measurements of LOC. Suppression of incoherent clutter by LoSCAN resulted in improved image quality without introducing many of the artifacts common to other coherence-based beamforming methods. In simulations with known targets and added channel noise, LoSCAN was shown to restore native contrast and increase DAS dynamic range by as much as 10-15 dB. These improvements were accompanied by DAS-like speckle texture along with reduced focal dependence and artifact compared to other coherence-based methods. Under in vivo liver and fetal imaging conditions, LoSCAN resulted in increased generalized contrast-to-noise ratio (gCNR) in nearly all matched image pairs (N = 366) with average increases of 0.01, 0.03, and 0.05 in good, fair, and poor quality DAS images, respectively, and overall changes in gCNR from -0.01 to 0.20, contrast-to-noise ratio (CNR) from -0.05 to 0.34, contrast from -9.5 to -0.1 dB, and texture mu/sigma from -0.37 to -0.001 relative to DAS.
The application of spatial coherence image quality feedback was further investigated in the context of color flow imaging to perform adaptive wall filter selection. The relationship between velocity estimation accuracy and spatial coherence was demonstrated in simulations with varying flow and clutter conditions. This relationship was leveraged to implement a novel method for coherence-based adaptive wall filtering, which selects a unique wall filter at each imaging location based on local clutter and flow properties captured by measurements of LOC and short-lag spatial coherence (SLSC). In simulations and phantom studies with known flow velocities and clutter, coherence-adaptive wall filtering was shown to reduce velocity estimation bias by suppressing low frequency energy from clutter and minimizing the attenuation of flow signal, while maintaining comparable velocity estimation variance relative to conventional wall filtering. These properties translated to in vivo color flow images of liver and fetal vessels that were able to provide direct visualization of low and high velocity flow under various cluttered imaging conditions without the manual tuning of wall filter cutoffs and/or priority thresholds.
Together, these studies present several promising applications of spatial coherence that are fundamentally unique from existing methods in ultrasound. Results in this work support the broad application of spatial coherence feedback to perform patient, window, and target-specific adjustment of imaging parameters to improve the usability and efficacy of diagnostic ultrasound.
Item Open Access: Cone Beam Computed Tomography Image Quality Augmentation using Novel Deep Learning Networks (2019), Zhao, Yao
Purpose: Cone beam computed tomography (CBCT) plays an important role in image guidance for interventional radiology and radiation therapy by providing 3D volumetric images of the patient. However, CBCT suffers from relatively low image quality, with severe image artifacts due to the nature of the image acquisition and reconstruction process. This work investigated the feasibility of using deep learning networks to substantially augment the image quality of CBCT by learning a direct mapping from the original CBCT images to their corresponding ground truth CT images. The possibility of using deep learning for scatter correction in CBCT projections was also investigated.
Methods: Two deep learning networks, i.e., a symmetric residual convolutional neural network (SR-CNN) and a U-net convolutional network, were trained to use the input CBCT images to produce high-quality CBCT images that match the corresponding ground truth CT images. Both clinical and Monte Carlo simulated datasets were included for model training. In order to eliminate misalignments between CBCT and the corresponding CT, rigid registration was applied to the clinical database. Binary masks obtained by the Otsu auto-thresholding method were applied to the Monte Carlo simulated data to avoid the negative impact of non-anatomical structures on the images. After model training, a new set of CBCT images was fed into the trained network to obtain augmented CBCT images, and the performances were evaluated and compared both qualitatively and quantitatively. The augmented CBCT images were quantitatively compared to CT using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM).
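As an illustration, PSNR (one of the two evaluation metrics) can be computed directly; the images below are random stand-ins, and SSIM, which requires a windowed luminance/contrast/structure comparison, is omitted for brevity:

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference image and a test image."""
    ref = reference.astype(float)
    tst = test.astype(float)
    if data_range is None:
        data_range = ref.max() - ref.min()  # dynamic range of the reference
    mse = np.mean((ref - tst) ** 2)
    return 10.0 * np.log10(data_range**2 / mse)

# Stand-in "CT" ground truth and a noisier "CBCT" version of it
rng = np.random.default_rng(4)
ct = rng.uniform(-1000, 1000, size=(128, 128))
cbct = ct + rng.normal(0.0, 20.0, size=ct.shape)
score = psnr(ct, cbct)  # ~40 dB for this noise level and range
```

A higher PSNR for the network output than for the original CBCT, computed against the same registered CT, is what "much higher PSNR" in the results refers to.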
Regarding the study on using deep learning for scatter correction in CBCT, the scatter signal for each projection was acquired by Monte Carlo simulation. A U-net model was trained to predict the scatter signals from the original CBCT projections. The predicted scatter components were then subtracted from the original CBCT projections to obtain scatter-corrected projections. CBCT images reconstructed from the scatter-corrected projections were quantitatively compared with those reconstructed from the original projections.
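The projection-domain correction step itself is a simple subtraction, sketched below with toy arrays; the clamping floor is an assumption added here so that a downstream log-normalization of intensities stays defined, not a detail from the abstract:

```python
import numpy as np

def scatter_correct(projection, predicted_scatter, floor=1e-6):
    """Subtract a (e.g., network-predicted) scatter estimate from a projection,
    clamping to a small positive floor to keep later log operations valid."""
    return np.maximum(projection - predicted_scatter, floor)

# Toy intensity projection and scatter estimate (arbitrary units)
proj = np.array([[1.0, 0.8], [0.6, 0.5]])
scat = np.array([[0.3, 0.3], [0.3, 0.6]])
corrected = scatter_correct(proj, scat)
# The over-estimated pixel (0.5 - 0.6) is clamped to the floor instead of going negative
```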
Results: The augmented CBCT images from both the SR-CNN and U-net models showed substantial improvement in image quality. Compared to the original CBCT, the augmented CBCT images also achieved much higher PSNR and SSIM in quantitative evaluation. U-net demonstrated better performance than SR-CNN in both quantitative evaluation and computational speed for CBCT image quality augmentation.
With the scatter correction in CBCT projections predicted by U-net, the scatter-corrected CBCT images demonstrated substantial improvement of the image contrast and anatomical details compared to the original CBCT images.
Conclusion: The proposed deep learning models can effectively augment CBCT image quality by correcting artifacts and reducing scatter. Given their relatively fast computational speeds and great performance, they can potentially become valuable tools to substantially enhance the quality of CBCT to improve its precision for target localization and adaptive radiotherapy.
Item Open Access: Correlated Polarity Noise Reduction: Development, Analysis, and Application of a Novel Noise Reduction Paradigm (2013), Wells, Jered R.
Image noise is a pervasive problem in medical imaging. It is a property endemic to all imaging modalities and one especially familiar in those modalities that employ ionizing radiation. Statistical uncertainty is a major limiting factor in the reduction of ionizing radiation dose; patient exposure must be minimized, but high image quality must also be achieved to retain the clinical utility of medical images. One way to achieve the goal of radiation dose reduction is through the use of image post-processing with noise reduction algorithms. By acquiring images at lower than normal exposure followed by algorithmic noise reduction, it is possible to restore image noise to near-normal levels. However, many denoising algorithms degrade the integrity of other image quality components in the process.
In this dissertation, a new noise reduction algorithm is investigated: Correlated Polarity Noise Reduction (CPNR). CPNR is a novel noise reduction technique that uses a statistical approach to reduce noise variance while maintaining excellent resolution and a "normal" noise appearance. In this work, the algorithm is developed in detail with the introduction of several methods for improving polarity estimation accuracy and maintaining the normality of the residual noise intensity distribution. Several image quality characteristics are assessed in the production of this new algorithm including its effects on residual noise texture, residual noise magnitude distribution, resolution effects, and nonlinear distortion effects. An in-depth review of current linear methods for medical imaging system resolution analysis will be presented along with several newly discovered improvements to existing techniques. This is followed by the presentation of a new paradigm for quantifying the frequency response and distortion properties of nonlinear algorithms. Finally, the new CPNR algorithm is applied to computed tomography (CT) to assess its efficacy as a dose reduction tool in 3-D imaging.
It was found that the CPNR algorithm can be used to reduce x-ray dose in projection radiography by a factor of at least two without objectionable degradation of image resolution. This is comparable to other nonlinear image denoising algorithms such as the bilateral filter and wavelet denoising. However, CPNR can accomplish this level of dose reduction with few edge effects and negligible nonlinear distortion of the anatomical signal, as evidenced by the newly developed nonlinear assessment paradigm. In application to multi-detector CT, XCAT simulations showed that CPNR can be used to reduce noise variance by 40% with minimal blurring of anatomical structures under a filtered back-projection reconstruction paradigm. When an apodization filter was applied, only 33% noise variance reduction was achieved, but the edge-saving qualities were largely retained. In application to cone-beam CT for daily patient positioning in radiation therapy, up to 49% noise variance reduction was achieved with as little as 1% reduction in the task transfer function measured from reconstructed data at the cutoff frequency.
This work concludes that the CPNR paradigm shows promise as a viable noise reduction tool that can be used to maintain current standards of clinical image quality at almost half of the normal radiation exposure. This algorithm has favorable resolution and nonlinear distortion properties as measured using a newly developed set of metrics for nonlinear algorithm resolution and distortion assessment. Simulation studies and the initial application of CPNR to cone-beam CT data reveal that CPNR may be used to reduce CT dose by 40%-49% with minimal degradation of image resolution.
Item Open Access: Development and Application of Patient-Informed Metrics of Image Quality in CT (2020), Smith, Taylor Brunton
The purpose of this dissertation was to develop methods of measuring patient-specific image quality in computed tomography. The methods developed in this dissertation enable the noise power spectrum, low-contrast resolution, and ultimately a detectability index to be measured in a patient-specific manner. The project is divided into three parts: 1) demonstrating the utility of currently developed patient-specific measures of image quality, 2) developing a method to estimate the noise power spectrum and low-contrast task transfer function from patient images, and 3) applying the extended metrology to the calculation of a patient-specific and task-specific detectability index. In part 1 (chapters 2 and 3), the value of patient-specific image quality is demonstrated in two ways. First, patient-specific measures of noise magnitude and high-contrast resolution were deployed on a broad clinical dataset of chest and abdomen-pelvis exams. Image quality and dose were measured for 87,629 cases across 97 medical facilities, and the variability in each outcome is reported. Such measurements of variability would be impossible in a phantom-derived image quality paradigm. Secondly, patient-specific measures of noise magnitude and high-contrast resolution were combined with a phantom-derived noise power spectrum to yield a detectability index. The hybrid (patient- and phantom-derived) detectability index was measured and retrospectively compared to the results of a detection observer study. The measured hybrid detectability index was found to correlate with human observer detection performance, further demonstrating the value of measuring patient-specific image quality. In part 2 (chapters 4 and 5), two image quality aspects are extended from a phantom-derived to a patient-specific paradigm.
In chapter 4, a method to measure the noise power spectrum from patient images is developed and validated using virtual imaging trial and physical phantom data. The method is applied to unseen clinical cases to demonstrate its feasibility and its sensitivity to expected trends across image reconstructions. Since the method relies on a sufficient area within the patient’s liver to make a measurement, the sensitivity of the method’s accuracy to region size is assessed. Results show that the measurements can be accurate with as few as 106 included pixels, and that measurements are sensitive to ground-truth differences in reconstruction algorithm. In chapter 5, a method to measure low-contrast resolution from patient images is developed and validated using low-contrast insert phantom scans. The method uses a support vector machine to learn the connection between the patient-specific noise power spectrum measured in chapter 4 and the low-contrast task transfer function. The estimation method is compared to a clinical alternative, and results show that it is more accurate on the basis of RMSE for iterative reconstructions (especially high-strength reconstructions). In part 3 (chapter 6 and appendix section 8.1), the developed patient-specific image quality metrology is applied to calculate a fully patient-specific detectability index. Here, patient-specific image quality measures are re-applied to the detectability index calculations from chapter 3, converting the calculations from a hybrid method to a fully patient-specific method. To do so, the patient-specific noise power spectrum estimates from chapter 4 were combined with the patient-specific low-contrast task transfer functions from chapter 5 to inform the detectability index calculations. The purpose of this chapter was to show the positive impact of measuring a task-based measure of image quality in a fully patient-specific paradigm.
The results show that the fully patient-specific detectability index has a statistically significant improvement in its relation with human detection accuracy over the hybrid measurements. This section also served as an indirect validation of the methodologies in chapters 4 and 5. Finally, all patient-specific measures are deployed over a variety of clinical cases to demonstrate the feasibility of using the methods to monitor image quality. In conclusion, this dissertation developed methods to assess task-based and task-generic image quality directly from patient images, and demonstrated the utility and value of patient-specific image quality assessment.
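The patient-specific noise power spectrum estimation mentioned above operates on uniform patches drawn from patient anatomy. Below is a minimal sketch of a standard ensemble NPS estimate from such patches; it is not the dissertation's exact liver-based method, and the function name and simple mean-detrending are assumptions:

```python
import numpy as np

def measure_nps(roi_stack, pixel_size_mm):
    """Estimate a 2-D noise power spectrum from an ensemble of
    uniform-region ROIs (e.g., patches from a patient's liver).

    roi_stack: (n_rois, N, N) array of HU values.
    Returns the NPS (HU^2 * mm^2) and the shifted frequency axis (1/mm).
    """
    n, N, _ = roi_stack.shape
    nps = np.zeros((N, N))
    for roi in roi_stack:
        # Detrend each ROI (here, by subtracting its mean) to isolate
        # the stochastic noise component before the Fourier transform.
        noise = roi - roi.mean()
        nps += np.abs(np.fft.fft2(noise)) ** 2
    # Normalize by the ensemble size and ROI area (standard NPS definition).
    nps = nps * pixel_size_mm**2 / (n * N * N)
    freqs = np.fft.fftfreq(N, d=pixel_size_mm)
    return np.fft.fftshift(nps), np.fft.fftshift(freqs)
```

As a sanity check, integrating the NPS over all spatial frequencies should recover the noise variance of the patches.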
Item Open Access Development of a Method to Detect and Quantify CT Motion Artifacts: Feasibility Study(2022) Khandekar, MadhuraArtifacts are known to reduce the quality of CT images and can affect the statistical analysis and quantitative utility of those images. Motion artifact is a leading type of CT artifact, arising from either voluntary or involuntary (respiratory and cardiac) movement. Currently, such artifacts, if present, are not quantified and monitored, nor are their dependencies on CT acquisition settings known. As a first step to address this gap, the aim of this study was to develop a neural network to detect and quantify motion artifacts in CT images. Training data were drawn from three sources; the pixels containing motion were segmented (Seg3D, University of Utah) and the segmentation masks used as the ground-truth labels. A convolutional neural network (U-Net) was trained to identify pixels containing motion. Model performance was assessed by correlating the percentage of voxels labeled as having motion in each slice of the pre-allocated testing data between the ground-truth and predicted segmentation masks, yielding a correlation coefficient of r = 0.43, and by constructing ROC curves. A series-wise ROC curve had AUC = 0.94, and a slice-wise ROC curve had AUC = 0.80. The correlation coefficient and AUCs are expected to improve as more training data are added. This network has the potential to be a useful clinical tool, enabling quality tracking systems to detect and quantify the presence of artifacts in the context of CT quality control.
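The slice-wise evaluation described above pairs a per-slice motion score with an ROC analysis. A sketch of how such a score and AUC could be computed, assuming the motion fraction of each predicted mask serves as the slice score (the names are illustrative; the study's exact scoring is not specified in the abstract):

```python
import numpy as np

def motion_fraction(mask):
    """Fraction of pixels flagged as motion in one binary slice mask."""
    return np.asarray(mask, dtype=float).mean()

def slice_wise_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic (no sklearn required).

    labels: binary array, 1 = slice truly contains motion.
    scores: continuous per-slice motion scores.
    """
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count positive/negative pairs where the positive slice scores
    # higher; ties count as half a concordant pair.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

Perfectly separated scores give an AUC of 1.0; reversed ordering gives 0.0.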
Item Open Access Minimum Detectability and Dose Analysis for Size-based Optimization of CT Protocols(2014) Smitherman, Christopher CraigPurpose: To develop a comprehensive model of task-based performance of CT across a broad library of CT protocols, so that radiation dose and image quality can be optimized within a large multi-vendor clinical facility.
Methods: 80 adult CT protocols from the Duke University Medical Center were grouped into 23 protocol groups with similar acquisition characteristics. A size-based image quality phantom (Duke Mercury Phantom 2.0) was imaged using these protocol groups for a range of clinically relevant dose levels on two CT manufacturer platforms (Siemens SOMATOM Definition Flash and GE CT750 HD). For each protocol group, phantom size, and dose level, the images were analyzed to extract task-based image quality metrics, the task transfer function (TTF) and the noise power spectrum (NPS). The TTF and NPS were further combined with generalized models of lesion task functions to predict the detectability of the lesions in terms of areas under the receiver operating characteristic curve (Az). A graphical user interface (GUI) was developed to present Az as a function of lesion size and contrast, dose, patient size, and protocol, as well as to derive the necessary dose to achieve a detection threshold for a targeted lesion.
Results: The GUI provided predictions of Az values modeling detection confidence for a targeted lesion, patient size, and dose. As an example, for an abdomen-pelvis exam on the GE scanner with a task size/contrast of 5 mm/50 HU, an Az of 0.9 indicated dose requirements of 4.0, 8.9, and 16.9 mGy for patient diameters of 25, 30, and 35 cm, respectively. For a constant patient diameter of 30 cm and 50-HU lesion contrast, the minimum detectable lesion sizes at those dose levels were predicted to be 8.4, 5.0, and 3.9 mm, respectively.
Conclusions: A CT protocol optimization platform was developed by combining task-based detectability calculations with a GUI that demonstrates the tradeoff between dose and image quality. The platform can be used to improve individual protocol dose efficiency, as well as to improve protocol consistency across various patient diameters and CT scanners. The GUI can further be used to calculate personalized dose for individualized examination tasks.
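The pipeline above combines the TTF and NPS with a lesion task function to predict Az. A hedged sketch using the common non-prewhitening (NPW) observer model and the equal-variance Gaussian mapping from d' to Az; this is a generic formulation of task-based detectability, not necessarily the exact model implemented in the platform:

```python
import numpy as np
from math import erf

def npw_dprime(f, W, TTF, NPS):
    """Non-prewhitening observer detectability index from radially
    sampled task function W(f), task transfer function TTF(f), and
    noise power spectrum NPS(f). f is a uniform grid in cycles/mm."""
    df = f[1] - f[0]
    signal = (W * TTF) ** 2 * 2.0 * np.pi * f   # 2-D radial integrand
    num = np.sum(signal) * df
    den = np.sum(signal * NPS) * df
    return num / np.sqrt(den)

def area_under_roc(dprime):
    # Equal-variance Gaussian decision variables: Az = Phi(d'/sqrt(2)).
    return 0.5 * (1.0 + erf(dprime / 2.0))
```

With a flat (white) NPS and ideal TTF, d' reduces to the square root of the task energy integral, and Az increases monotonically from 0.5 with d'.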
Item Open Access Parameterizing Image Quality of TOF versus Non-TOF PET as a Function of Body Size(2011) Wilson, Joshua MarkPositron emission tomography (PET) is a nuclear medicine diagnostic imaging exam of metabolic processes in the body. Radiotracers, which consist of positron emitting radioisotopes and a molecular probe, are introduced into the body, emitted radiation is detected, and tomographic images are reconstructed. The primary clinical PET application is in oncology using a glucose analogue radiotracer, which is avidly taken up by some cancers.
It is well known that PET performance and image quality degrade as body size increases, and epidemiological studies over the past two decades show that the adult US population's body size has increased dramatically and continues to increase. Larger patients have more attenuating material, which increases the number of emitted photons that are scattered or absorbed within the body. Thus, for a fixed amount of injected radioactivity and acquisition duration, the number of measured true coincidence events will decrease, and the background fractions will increase. Another size-related factor, independent of attenuation, is the volume throughout which the measured coincidence counts are distributed: for a fixed acquisition duration, as the body size increases, the counts are distributed over a larger area. This is true both for a fixed amount of radioactivity, where the concentration decreases as size increases, and for a fixed concentration, where the amount of radioactivity increases with size.
Time-of-flight (TOF) PET is a recently commercialized technology that allows the localization, with a certain degree of error, of a positron annihilation using timing differences in the detection of coincidence photons. Both heuristic and analytical evaluations predict that TOF PET will have improved performance and image quality compared to non-TOF PET, and this improvement increases as body size increases. The goal of this dissertation is to parameterize the image quality improvement of TOF PET compared to non-TOF PET as a function of body size. Currently, no standard for comparison exists.
Previous evaluations of TOF PET's improvement have been made with either computer-simulated data or acquired data using a few discrete phantom sizes. A phantom that represents a range of attenuating dimensions, that can have a varying radioactivity distribution, and that can have radioactive inserts positioned throughout its volume would facilitate characterizing PET system performance and image quality as a function of body size. A fillable, tapered phantom was designed, simulated, and constructed. The phantom has an oval cross-section ranging from 38.5 × 49.5 cm to 6.8 × 17.8 cm, a length of 51.1 cm, a mass of 6 kg (empty), a mass of 42 kg (water filled), and 1.25-cm acrylic walls.
For this dissertation research, PET image quality was measured using multiple, small spheres with diameters near the spatial resolution of clinical whole-body PET systems. Measurements made on a small sphere, which typically include a small number of image voxels, are susceptible to fluctuations over the few voxels, so using multiple spheres improves the statistical power of the measurements that, in turn, reduces the influence of these fluctuations. These spheres were arranged in an array and mounted throughout the tapered phantom's volume to objectively measure image quality as a function of body size. Image quality is measured by placing regions of interest on images and calculating contrast recovery, background variability, and signal to noise ratio.
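The ROI-based metrics named above (contrast recovery, background variability, signal-to-noise ratio) are commonly defined as in the NEMA image quality standard for hot spheres. A sketch under that assumption; the dissertation's exact definitions may differ:

```python
import numpy as np

def image_quality_metrics(sphere_mean, bkg_means, activity_ratio):
    """NEMA-style hot-sphere metrics from ROI mean values.

    sphere_mean: mean voxel value in a sphere ROI.
    bkg_means: mean values of several background ROIs.
    activity_ratio: true sphere-to-background activity concentration ratio.
    """
    bkg = np.asarray(bkg_means, dtype=float)
    B, sd = bkg.mean(), bkg.std(ddof=1)
    # Contrast recovery: measured contrast relative to true contrast.
    contrast_recovery = (sphere_mean / B - 1.0) / (activity_ratio - 1.0)
    # Background variability: spread of background ROI means.
    background_variability = sd / B
    # Signal-to-noise ratio of the sphere against background noise.
    snr = (sphere_mean - B) / sd
    return contrast_recovery, background_variability, snr
```

Averaging each metric over many small spheres, as done in this work, reduces the voxel-level fluctuations a single sphere would suffer.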
Image quality as a function of body size was parameterized for TOF compared to non-TOF PET using 46 1.0-cm spheres positioned in six different body sizes in a fillable, tapered phantom. When the TOF and non-TOF PET images were reconstructed for matched contrast, the square of the ratio of the images' signal-to-noise ratios for TOF to non-TOF PET was plotted as a function, f(D), of the radioactivity distribution size, D, in cm. A linear regression was fit to the data: f(D) = 0.108D - 1.36. This was compared to the ratio of D and the localization error, σd, based on the system timing resolution, which is approximately 650 ps for the TOF PET system used for this research. With the image quality metrics used in this work, the ratio of TOF to non-TOF PET fits well to a linear relationship and is parallel to D/σd. For D < 20 cm, there is no image quality improvement, but for radioactivity distributions D > 20 cm, TOF PET improves image quality over non-TOF PET. PET imaging's clinical use has increased over the past decade, and TOF PET's image quality improvement for large patients makes TOF an important new technology because the occurrence of obesity in the US adult population continues to increase.
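The reported agreement between the measured gain f(D) and the theoretical ratio D/σd can be checked numerically: at 650 ps timing resolution, the slope of D/σd is close to the fitted 0.108 per cm. The numbers below come from the abstract; the helper names are illustrative:

```python
# Speed of light in cm per picosecond.
C_CM_PER_PS = 0.029979

def tof_localization_error_cm(timing_resolution_ps):
    # sigma_d = c * delta_t / 2: positional uncertainty of the
    # annihilation point along the line of response.
    return C_CM_PER_PS * timing_resolution_ps / 2.0

def measured_snr_gain_sq(D_cm):
    # Fitted regression from the study: f(D) = 0.108*D - 1.36, the
    # squared TOF/non-TOF SNR ratio at matched contrast, D in cm.
    return 0.108 * D_cm - 1.36

sigma_d = tof_localization_error_cm(650.0)  # ~9.7 cm
dn_slope = 1.0 / sigma_d                    # slope of D / sigma_d
```

The fitted slope (0.108) and the theoretical slope 1/σd (~0.103) are nearly parallel, and f(D) crosses 1 near D = 20 cm, consistent with the reported crossover where TOF begins to outperform non-TOF.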
Item Open Access Prospective Estimation of Radiation Dose and Image Quality for Optimized CT Performance(2016) Tian, XiaoyuX-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].
Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.
As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus a responsibility to optimize the amount of radiation dose for CT examinations. The key to dose optimization is to determine the minimum amount of radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would significantly benefit from effective metrics to characterize radiation dose and image quality for a CT exam. Moreover, if accurate predictions of the radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models to prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to implement the theoretical models into clinical practice by developing an organ-based dose monitoring system and an image-based noise addition software for protocol optimization.
More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current condition. The study effectively modeled the anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distribution. The dependence of organ dose coefficients on patient size and scanner models was further evaluated. Distinct from prior work, these studies use the largest number of patient models to date with representative age, weight percentile, and body mass index (BMI) range.
With effective quantification of organ dose under constant tube current condition, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with the dose estimated based on Monte Carlo simulations with TCM function explicitly modeled.
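The convolution-based TCM estimation technique can be caricatured as follows. This is a deliberately simplified sketch: the kernel, coefficient, and normalization are placeholders standing in for the dissertation's validated model, not reproductions of it:

```python
import numpy as np

def organ_dose_tcm(ma_profile, kernel, organ_coeff, ctdi_per_ma):
    """Illustrative convolution-based organ dose estimate.

    ma_profile: tube current (mA) sampled along the z-axis.
    kernel: dose-spread kernel modeling scatter along z (placeholder).
    organ_coeff: precomputed organ/size dose coefficient (placeholder).
    ctdi_per_ma: scanner output (mGy per mA) at the reference setting.
    """
    # Convolve the modulation profile with the dose-spread kernel to
    # approximate the local radiation field, then weight by the
    # organ-specific coefficient.
    field = np.convolve(ma_profile, kernel, mode="same") * ctdi_per_ma
    return organ_coeff * field.mean()
```

With a flat tube-current profile and a delta kernel, this collapses to the constant-current case of Chapter 3 (dose = coefficient × CTDIvol), which is the consistency one would expect between the two chapters.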
Chapter 5 aims to implement the organ dose-estimation framework in clinical practice to develop an organ dose-monitoring program based on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). In the first phase of the study we focused on body CT examinations, so the patient’s major body landmark information was extracted from the patient scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.
With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. It outlines the method developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.
Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
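Step (1), image-based noise addition, commonly relies on quantum noise scaling inversely with the square root of dose. A simplified sketch under that assumption; practical insertion methods add correlated noise matched to the scanner's NPS, which this white-noise version deliberately ignores:

```python
import numpy as np

def simulate_reduced_dose(image, sigma_full, dose_fraction, rng=None):
    """Simulate a lower-dose CT image by injecting zero-mean Gaussian
    noise into a full-dose image.

    Assumes quantum noise sigma scales as 1/sqrt(dose), so the extra
    noise required to reach a fraction `dose_fraction` of the original
    dose is sigma_full * sqrt(1/dose_fraction - 1).
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma_add = sigma_full * np.sqrt(1.0 / dose_fraction - 1.0)
    return image + rng.normal(0.0, sigma_add, np.shape(image))
```

For example, simulating quarter dose from an image with 10 HU of noise should yield roughly 20 HU of total noise, since 10/sqrt(0.25) = 20.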
Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.
Item Open Access Scatter Correction for Dual-source Cone-beam CT Using the Pre-patient Grid(2014) Chen, YingxuanPurpose: A variety of cone beam CT (CBCT) systems has been used in the clinic for image guidance in interventional radiology and radiation therapy. Compared with conventional single-source CBCT, dual-source CBCT has the potential for dual-energy imaging and faster scanning. However, it adds additional cross-scatter when compared to a single-source CBCT system, which degrades the image quality. Previously, we developed a synchronized moving grid (SMOG) system to reduce and correct scatter for a single-source CBCT system. The purpose of this work is to implement the SMOG system on a prototype dual-source CBCT system and to investigate its efficacy in scatter reduction and correction under various imaging acquisition settings.
Methods: A 1-D grid was attached to each x-ray source during dual-source CBCT imaging to acquire partially blocked projections. As the grid partially blocked the primary x-ray beams and divided them into multiple quasi-fan beams during the scan, it produced a physical scatter reduction effect in the projections. Phantom data were acquired in the unblocked area, while the scatter signal was measured from the blocked area in the projections. The scatter distribution was estimated from the measured scatter signals using cubic spline interpolation for post-scan scatter correction. Complementary partially blocked projections were acquired at each scan angle by positioning the grid at different locations, and were merged to obtain full projections for reconstruction. In this study, three sets of CBCT images were reconstructed from projections acquired (a) without the grid, (b) with the grid but without scatter correction, and (c) with the grid and with scatter correction, to evaluate the effects of scatter reduction and scatter correction on artifact reduction and on improvements in the contrast-to-noise ratio index (CNR') and CT number accuracy. The efficacy of the scatter reduction and correction method was evaluated for Catphan phantoms of different diameters (15 cm, 20 cm, and 30 cm), different grids (grid blocking ratios of 1:1 and 2:1), different acquisition modes (simultaneous: both tubes firing at the same time; interleaved: tubes firing alternately; sequential: only one tube firing in one rotation), and different reconstruction algorithms (iterative reconstruction vs. Feldkamp-Davis-Kress (FDK) back projection).
Results: The simultaneous scanning mode had the most severe scatter artifacts and the most degraded CNR' when compared to either the interleaved mode or the sequential mode, due to the cross-scatter between the two x-ray sources in the simultaneous mode. Scatter artifacts were substantially reduced by scatter reduction and correction. CNR' values of the different inserts in the Catphan were enhanced on average by 24%, 13%, and 33% for phantom sizes of 15 cm, 20 cm, and 30 cm, respectively, with scatter reduction alone and a 1:1 grid. Correspondingly, CNR' values were enhanced by 34%, 18%, and 11%, respectively, with both scatter reduction and correction. However, CNR' may decrease with scatter correction alone for the larger phantom and low-contrast ROIs, because of an increase in noise after scatter correction. In addition, the reconstructed HU numbers were linearly correlated with the nominal HU numbers. A higher grid blocking ratio, i.e., one with a greater blocked area, resulted in better scatter artifact removal and CNR' improvement, at the cost of complexity and an increased number of exposures. Iterative reconstruction with total variation regularization resulted in better noise reduction and enhanced CNR' in comparison to the FDK method.
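The scatter estimation step, interpolating a smooth scatter field from the grid-blocked detector regions, can be sketched for a single detector row. The 1-D setup and names are illustrative, and note that at blocked pixels the primary signal must still be recovered from the complementary projection, as described in the Methods:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def correct_scatter_1d(projection, blocked_idx):
    """Estimate and subtract the scatter profile of one detector row.

    projection: 1-D array of total signal (primary + scatter); at the
        grid-blocked indices the primary beam is blocked, so the
        measured value there is approximately scatter only.
    blocked_idx: sorted indices of the blocked detector pixels.
    Returns (scatter-corrected row, estimated scatter profile).
    """
    x = np.arange(projection.size)
    # Scatter varies slowly across the detector, so a cubic spline
    # through the blocked-pixel samples recovers the full profile.
    spline = CubicSpline(blocked_idx, projection[blocked_idx])
    scatter_est = spline(x)
    return projection - scatter_est, scatter_est
```

On a synthetic row with a slowly varying scatter background, the spline recovers the scatter profile closely and the corrected unblocked pixels approach the true primary signal.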
Conclusion: Our method with a pre-patient grid can effectively reduce scatter artifacts, enhance CNR', and modestly improve CT number linearity for the dual-source CBCT system. Settings such as the grid blocking ratio and acquisition mode can be optimized for the patient-specific condition to further improve image quality.
Item Open Access Spatial Coherence-Based Adaptive Acoustic Output Selection for Diagnostic Ultrasound(2022) Flint, Katelyn MaureenThe US Food and Drug Administration (FDA) provides guidelines for maximum acoustic output for diagnostic ultrasound imaging through metrics such as intensity, Mechanical Index (MI), and Thermal Index (TI). However, even within these guideline values, if the acoustic exposure levels used do not benefit image quality, they represent an unnecessary risk to patient safety. Ultrasound users have control over many settings, including ones that directly and indirectly change the acoustic output, and the user is largely responsible for deciding how to manage the safety risks based on on-screen displays of MI and TI. The FDA and professional societies advise users to observe the ALARA (as low as reasonably achievable) principle with regard to acoustic exposure, but several studies have shown that the majority of ultrasound users do not monitor safety indices. To address this discrepancy, an adaptive ultrasound method has been developed that could be used to automatically adjust acoustic exposure in real-time in response to image quality feedback.
In this work, MI was used as the measure of acoustic output, and lag-one coherence (LOC) was the image quality feedback parameter. LOC is the average spatial correlation between backscattered echoes received on neighboring ultrasound transducer array elements. Previous work has shown that LOC is predictive of local signal-to-noise ratio (SNR), and that it is sensitive to incoherent acoustic clutter and temporally-incoherent noise. During B-mode ultrasound imaging, LOC was monitored as MI was adjusted, and the data consistently formed a sigmoid shape. At lower MI values, LOC increased quickly with increasing output, but at higher MI values, increases in acoustic output often did not translate to increased image quality. This relationship was consistent for other image quality metric-versus-MI data, including contrast, contrast-to-noise ratio (CNR), and generalized contrast-to-noise ratio (gCNR).
The MI value at which the LOC began to approach an asymptote was denoted the “ALARA MI.” In this work, ALARA MI values were calculated for a range of obstetric imaging targets that are scanned during anatomy exams, including placenta, fetal abdomen, heart, kidney, bladder, stomach, ventricles, and extremities. The placenta data had the lowest median ALARA MI (0.59) and the fetal heart data had the highest (0.83). There was considerable variation in the ALARA MI values, even for the same participant, so frequent updates to the acoustic output settings would be recommended during live scanning. Additionally, the correlation between the ALARA MI and the LOC achieved at that setting was found to be very weak.
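Identifying where LOC "begins to approach an asymptote" can be operationalized by fitting a sigmoid to the LOC-versus-MI data and thresholding near its upper plateau. The sketch below assumes a 98%-of-asymptote criterion, which is an illustrative choice and not the study's definition of the ALARA MI:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(mi, lo, hi, mi50, slope):
    """Four-parameter logistic: LOC rises from lo to hi around mi50."""
    return lo + (hi - lo) / (1.0 + np.exp(-(mi - mi50) / slope))

def alara_mi(mi_vals, loc_vals, frac=0.98):
    """Fit LOC-vs-MI data to a sigmoid and return the lowest MI at
    which LOC reaches `frac` of its fitted upper asymptote."""
    p0 = (loc_vals.min(), loc_vals.max(), np.median(mi_vals), 0.1)
    (lo, hi, mi50, slope), _ = curve_fit(
        sigmoid, mi_vals, loc_vals, p0=p0, maxfev=10000
    )
    target = lo + frac * (hi - lo)
    # Invert the fitted sigmoid analytically at the target LOC.
    return mi50 - slope * np.log((hi - lo) / (target - lo) - 1.0)
```

On noiseless synthetic data generated from a known sigmoid, the returned MI matches the analytic 98% crossing point.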
Initially, a fixed region of interest (ROI) was used for acoustic output optimization. This would require the structure to be aligned with the ROI and the optimization process to be manually initiated. Considering the demands on the sonographer during clinical ultrasound scanning, it would not be feasible to add these steps every time a new imaging window is used. An automated ROI-selection algorithm was developed that would allow the entire adaptive acoustic output selection process to happen without user input. This algorithm used envelope-detected B-mode image data that are readily available on clinical scanners to identify where to perform the optimization. Testing on clinical placenta and fetal abdomen data showed that it reliably recommended good regions for acoustic output optimization.
The results of this work suggest that near-maximum image quality can be achieved with a lower acoustic output level than is currently used clinically, and automated acoustic output adjustments could enable more consistent observation of the ALARA principle. In the future, this could be extended to other ultrasound modes, such as Doppler imaging, and additional acoustic output metrics could be incorporated.
Preliminary assessment of temporal SNR was performed, and a wide range of temporal SNR levels is associated with the ALARA MI settings found in this study. Future work may also investigate using a temporal SNR threshold to determine the ALARA output level. Spatial coherence measurements, such as LOC, reflect the degradation in image quality from acoustic clutter and electronic noise, and temporal coherence is affected by motion and electronic noise. Although motion is an important factor in clinical imaging, temporal coherence does not require access to channel data, so these calculations would be easier to implement on existing scanners. These trade-offs are important to consider when attempting to capture the underlying electronic noise level to inform an automated ALARA ultrasound system.