Browsing by Subject "Image processing"
Item Open Access Application of Stochastic Processes in Nonparametric Bayes (2014) Wang, Yingjian
This thesis presents theoretical studies of some stochastic processes and their applications in Bayesian nonparametric methods. The stochastic processes discussed in the thesis are mainly those with independent increments, the Lévy processes. We develop new representations for the Lévy measures of two representative examples of Lévy processes, the beta and gamma processes. These representations are manifested as an infinite sum of well-behaved (proper) beta and gamma distributions, with truncation and posterior analyses provided. The decompositions provide new insights into the beta and gamma processes (and their generalizations), and we demonstrate how the proposed representation unifies some properties of the two, as these are of increasing importance in machine learning.
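In practice the beta process is often handled through a finite approximation. The sketch below draws atom weights from the standard finite beta approximation, whose draws approach a beta process as the number of atoms K grows; this is a generic textbook construction for illustration, not the new representation developed in the thesis, and the parameter values (concentration c, mass gamma) are made up.

```python
import random

def finite_beta_process(K, c=1.0, gamma=2.0, seed=0):
    """Atom weights pi_k from the finite approximation
    pi_k ~ Beta(c*gamma/K, c*(1 - gamma/K)).

    As K grows, the collection of weights converges to a draw from a beta
    process with concentration c and total mass gamma (illustrative values)."""
    rng = random.Random(seed)
    return [rng.betavariate(c * gamma / K, c * (1.0 - gamma / K)) for _ in range(K)]

weights = finite_beta_process(K=1000)
# Each weight lies in [0, 1]; the total mass concentrates around gamma.
```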
Next, a new Lévy process is proposed for an uncountable collection of covariate-dependent feature-learning measures; the process is called the kernel beta process. Available covariates are handled efficiently via the kernel construction, with covariates assumed observed with each data sample ("customer") and latent covariates learned for each feature ("dish"). The dependencies among the data are represented with the covariate-parameterized kernel function. The beta process is recovered as a limiting case of the kernel beta process. An efficient Gibbs sampler is developed for computations, and state-of-the-art results are presented for image processing and music analysis tasks.
Last is a non-Lévy example: the multiplicative gamma process applied to the low-rank representation of tensors. The multiplicative gamma process is placed along the super-diagonal of tensors in the rank decomposition, and its shrinkage property nonparametrically learns the rank from the multiway data. This model is constructed to be conjugate for the continuous multiway data case. For the non-conjugate binary multiway data, a Polya-Gamma auxiliary variable is sampled to elicit closed-form Gibbs sampling updates. This rank decomposition of tensors driven by the multiplicative gamma process yields state-of-the-art performance on various synthetic and benchmark real-world datasets, with desirable model scalability.
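The shrinkage mechanism can be sketched as follows: the multiplicative gamma process builds each precision as a cumulative product of gamma variables, so later super-diagonal weights are increasingly shrunk toward zero. The hyperparameter values below are illustrative, not those used in the thesis.

```python
import random

def mgp_precisions(H, a1=2.0, a2=3.0, seed=0):
    """Precisions tau_h = prod_{l<=h} delta_l of the multiplicative gamma process.

    delta_1 ~ Gamma(a1, 1) and delta_h ~ Gamma(a2, 1) for h > 1; with a2 > 1
    the precisions tend to grow with h, shrinking higher-index weights."""
    rng = random.Random(seed)
    deltas = [rng.gammavariate(a1, 1.0)] + [rng.gammavariate(a2, 1.0) for _ in range(H - 1)]
    taus, running = [], 1.0
    for d in deltas:
        running *= d
        taus.append(running)
    return taus

taus = mgp_precisions(H=10)
# A super-diagonal weight would then be drawn as Normal(0, 1/tau_h).
```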
Item Open Access Automatic Identification of Training & Testing Data for Buried Threat Detection using Ground Penetrating Radar (2017) Reichman, Daniel
Ground penetrating radar (GPR) is one of the most popular and successful sensing modalities that has been investigated for landmine and subsurface threat detection. The radar is attached to the front of a vehicle and collects measurements along the path of travel. At each spatial location queried, a time-series of measurements is collected, and the measured data are often visualized as images within which the signals corresponding to buried threats exhibit a characteristic appearance. This appearance is typically hyperbolic and has been leveraged to develop several automated detection methods. Many of the detection methods applied to this task are supervised, and therefore require labeled examples of threat and non-threat data for training. Labeled examples are typically obtained by collecting data over deliberately buried threats at known spatial locations. However, uncertainty exists with regard to the temporal locations in depth at which the buried threat signal exists in the imagery. This uncertainty is an impediment to obtaining labeled examples of buried threats to provide to the supervised learning model. The focus of this dissertation is on overcoming the problem of identifying training data for supervised learning models for GPR buried threat detection.
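The characteristic hyperbolic appearance arises because the two-way travel time to a buried point target grows with horizontal offset from the target. A minimal sketch of that geometry (the depth, position, and wave speed values are illustrative assumptions, not data from the dissertation):

```python
import math

def gpr_two_way_time(x, x0=0.0, depth=0.3, v=1.0e8):
    """Two-way travel time (s) from an antenna at horizontal position x (m) to
    a point target buried at `depth` (m) below position x0; v is the wave
    speed in the ground (m/s). All values are illustrative."""
    return 2.0 * math.sqrt(depth ** 2 + (x - x0) ** 2) / v

# Sampling the curve along the scan direction traces the hyperbola seen in B-scans.
times = [gpr_two_way_time(x / 10.0) for x in range(-10, 11)]
```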
The ultimate goal is to apply the lessons learned in order to improve the performance of buried threat detectors. Therefore, a particular focus of this dissertation is to understand the implications of particular data selection strategies, and to develop principled general strategies for selecting the best approaches. This is done by identifying three factors that are typically considered in the literature with regard to this problem. Experiments are conducted to understand the impact of these factors on detection performance. The outcomes of these experiments provide several insights about the data that can help guide the future development of automated buried threat detectors.
The first set of experiments suggests that a substantial number of threat signatures are neither hyperbolic nor regular in their appearance. These insights motivated the development of a novel buried threat detector that improves over state-of-the-art benchmark algorithms on a large collection of data. In addition, the newly developed algorithm exhibits improved robustness relative to those algorithms. The second set of experiments suggests that selecting the data corresponding to buried threats can be automated, replacing manually designed methods for this task.
Item Open Access Automatic Volumetric Analysis of the Left Ventricle in 3D Apical Echocardiographs (2015) Wald, Andrew James
Apically-acquired 3D echocardiographs (echoes) are becoming a standard data component in the clinical evaluation of left ventricular (LV) function. Ejection fraction (EF) is one of the key quantitative biomarkers derived from echoes and used by echocardiographers to study a patient's heart function. In present clinical practice, EF is either grossly estimated by experienced observers, approximated using orthogonal 2D slices and Simpson's method, determined by manual segmentation of the LV lumen, or measured using semi-automatic proprietary software such as Philips QLab-3DQ. Each of these methods requires particular skill from the operator, and may be time-intensive, subject to variability, or both.
To address this, I have developed a novel, fully automatic method for LV segmentation in 3D echoes that offers EF calculation on clinical datasets at the push of a button. The solution is built on a pipeline that utilizes a number of image processing and feature detection methods specifically adapted to the 3D ultrasound modality. It is designed to be reasonably robust at handling dropout and missing features typical of clinical echocardiography. It is hypothesized that this method can eliminate the need for sonographer input, yet provide results statistically indistinguishable from those of experienced sonographers using QLab-3DQ, the current gold standard employed at Duke University Hospital.
A pre-clinical validation set, which was also used for iterative algorithm development, consisted of 70 cases previously seen at Duke. Of these, manual segmentations of 7 clinical cases were compared to the algorithm. The final algorithm predicts EF within ±0.02 ratio units for 5 of them and within ±0.09 units for the remaining 2 cases, within common clinical tolerance. Another 13 of the cases, often used for sonographer training and rated as having good image quality, were analyzed using QLab-3DQ, of which 11 cases showed concordance (±0.10) with the algorithm. The remaining 50 cases, retrospectively recruited at Duke and representative of everyday image quality, showed 62% concordance (±0.10) between QLab-3DQ and the algorithm. The fraction of concordant cases is highly dependent on image quality, and concordance improves greatly upon disqualification of poor-quality images. Visual comparison of the QLab-3DQ segmentation to my algorithm overlaid on the original echoes also suggests that my method may be preferable or of high utility even in cases of EF discordance. This paper describes the algorithm and offers justifications for the adopted methods. The paper also discusses the design of a retrospective clinical trial now underway at Duke with 60 additional unseen cases intended only for independent validation.
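For reference, the ejection fraction itself is a simple ratio of the end-diastolic and end-systolic LV volumes that a segmentation provides; the volumes in the example below are illustrative, not study data.

```python
def ejection_fraction(edv_ml, esv_ml):
    """EF as a ratio: (end-diastolic volume - end-systolic volume) / EDV."""
    if edv_ml <= 0 or esv_ml < 0 or esv_ml > edv_ml:
        raise ValueError("volumes must satisfy 0 <= ESV <= EDV with EDV > 0")
    return (edv_ml - esv_ml) / edv_ml

# Illustrative volumes: EDV 120 mL, ESV 50 mL.
ef = ejection_fraction(120.0, 50.0)
```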
Item Embargo Computational methods for high-resolution structure determination of macromolecular complexes imaged in situ using cryo-electron tomography (2024) Liu, Hsuan-Fu
Cryo-electron tomography (CET) combined with sub-volume averaging (SVA) is a powerful imaging technique for determining macromolecular structures in situ. To resolve structures at high resolution, large numbers of volumes containing copies of the protein of interest are aligned and averaged in three dimensions. Using this strategy, the structures of highly ordered virus capsid proteins and large ribosomes have been resolved at near-atomic resolution. However, CET studies of proteins of lower molecular weight (<1000 kDa), or of targets present in their crowded native context, have been limited to sub-nanometer resolutions. This is due to limitations in the accuracy of image alignment resulting from the low image contrast generated by the smaller scattering masses and the presence of overlapping objects in the cellular environment. While recent advances in high-throughput tomography that use beam image-shift accelerated data acquisition (BISECT) make it possible to produce enough data for SVA, the demanding storage and processing requirements associated with analyzing large numbers of particles often make structure determination impractical. The overarching goal of this thesis is to build a computational framework for CET/SVA structure determination that streamlines and extends the applicability of the technique to a wider class of biomedically relevant targets while improving the resolution of structures to near-atomic resolution.
To achieve this, this thesis focuses on: (1) filling gaps in the current CET data analysis workflow by designing a comprehensive end-to-end platform for SVA, (2) improving the resolution of structures by developing methods for improved alignment of protein images and better extraction of high-resolution information, and (3) validating our workflows by determining low-molecular-weight structures and native membrane-bound proteins at near-atomic resolution. To routinely convert raw tilt-series into high-resolution structures, we developed a high-throughput data collection approach, implemented robust strategies for tilt-series alignment and particle picking, and designed a scalable platform for distributed image analysis that makes analysis of large datasets feasible. To improve resolution, we used a constrained image alignment approach that uses parameters from the tilt geometry to overcome the low contrast and crowdedness of tomographic data. In addition, we efficiently recovered high-resolution signal contained in the raw data using per-tilt CTF correction and data-driven exposure weighting. These advances allowed the structure determination of low-molecular-weight complexes such as dGTPase (300 kDa) and of immature human endogenous retrovirus K (HERV-K) Gag and immature human immunodeficiency virus 1 (HIV-1) Gag at near-atomic resolution. Our methods for CET/SVA allowed routine determination of structures of biomedically important targets both in vitro and in situ at high enough resolution to elucidate mechanistic details governing virus assembly and infection. These advances represent an important step towards closing the resolution gap between high-resolution strategies used to study molecular assemblies reconstituted in vitro and techniques for in situ structure determination.
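To illustrate exposure weighting in general (not the data-driven scheme developed in this thesis), the widely used empirical dose-weighting curve of Grant and Grigorieff (2015) down-weights high-resolution Fourier components as electron dose accumulates:

```python
import math

def exposure_weight(spatial_freq, accumulated_dose):
    """Relative weight of a Fourier component at `spatial_freq` (1/Angstrom)
    after `accumulated_dose` (electrons/Angstrom^2), using the empirical
    critical-exposure fit Ne(k) = 0.245 * k**-1.665 + 2.81 from Grant &
    Grigorieff (2015). A generic sketch, not the thesis's own weighting."""
    ne = 0.245 * spatial_freq ** (-1.665) + 2.81
    return math.exp(-accumulated_dose / (2.0 * ne))

# High-resolution signal fades with dose: later tilts receive smaller weights.
w_early = exposure_weight(0.25, accumulated_dose=5.0)
w_late = exposure_weight(0.25, accumulated_dose=40.0)
```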
Item Open Access Development of acoustofluidic scanning nanoscope (2022) Jin, Geonsoo
The largest obstacle in nanoscale microscopy is the diffraction limit. Although several means of achieving sub-diffraction resolution exist, they all have shortcomings such as cost, complexity, and processing time, which make them impractical for widespread use. Additionally, these technologies struggle to find a balance between high resolution and a large field of view. In the introduction of this dissertation, we provide an overview of various microsphere-based super-resolution techniques that address the shortcomings of existing platforms and consistently achieve sub-diffraction resolutions. Initially, the theoretical basis of photonic nanojets, which make microsphere-based super-resolution imaging possible, is discussed. In the following sections, different types of acoustofluidic scanning techniques and the intelligent nanoscope are explored. The introduction concludes with an emphasis on the limitless potential of this technology and the wide range of possible biomedical applications.
First, we have documented the development of an acoustofluidic scanning nanoscope that can achieve both high resolution and a large field of view at the same time, which alleviates a long-standing shortcoming of conventional microscopes. The acoustofluidic scanning nanoscope developed here can either serve as an add-on component to expand the capability of a conventional microscope, or be paired with low-cost imaging platforms to build a stand-alone microscope for portable imaging. The acoustofluidic scanning nanoscope achieves high-resolution imaging without the need for conventional high-cost and bulky objectives with high numerical apertures. The field of view of the acoustofluidic scanning nanoscope is much larger than that of a conventional high numerical aperture objective lens, and it is able to achieve the same resolving power.
The acoustofluidic scanning nanoscope automatically focuses and maintains a constant working distance during the scanning process thanks to the interaction of the microparticles with the liquid domain. The resolving power of the acoustofluidic scanning nanoscope can easily be adjusted by using microparticles of different sizes and refractive indices. Additionally, it may be possible to further improve the performance of this platform by exploring additional microparticle sizes and materials, in combination with various objectives. Altogether, we believe this acoustofluidic scanning nanoscope has the potential to be integrated into a wide range of applications, from portable nano-detection to biomedicine and microfluidics.
Next, we developed a dual-camera acoustofluidic nanoscope with a seamless image-merging algorithm (an alpha blending process). This design allows us to precisely image both the sample and the microspheres simultaneously and accurately track the particle path and location. Therefore, the number of images required to capture the entire field of view (200 × 200 μm) using our acoustofluidic scanning nanoscope is reduced 55-fold compared with previous designs. Moreover, the image quality is also greatly improved by applying the alpha blending technique, which is critical for accurately depicting and identifying nanoscale objects or processes. This dual-camera acoustofluidic nanoscope paves the way for enhanced nanoimaging with high resolution and a large field of view.
Next, we developed an acoustofluidic scanning nanoscope with a fluorescence amplification technique. Nanoscale fluorescence signal amplification is a significant capability for many areas of biomedical and cell biology research. Different types of fluorescence amplification techniques have been studied; however, those technologies still require complex processing and rely on elaborate optical systems.
To overcome these limitations, we developed an acoustofluidic scanning nanoscope with fluorescence amplification using a hard PDMS membrane. Microsphere magnification via the photonic nanojet effect, combined with the hard PDMS membrane, delivers the focal distance needed to maximize the amplification. Moreover, a bidirectional acoustofluidic scanning device with image processing was also developed to perform 2D scanning of a large field of view. In the image processing procedure, we applied a lens-distortion correction to restore the distorted images. This fluorescence amplification via the acoustofluidic nanoscope enables nanoscale fluorescence imaging.
Next, we developed an intelligent nanoscope that combines machine learning and microsphere array-based imaging to: (1) surpass the diffraction limit of the microscope objective with microsphere imaging to provide high-resolution images; (2) provide large field-of-view imaging without sacrificing resolution by utilizing a microsphere array; and (3) rapidly classify nanomaterials using a deep convolutional neural network. The intelligent nanoscope delivers more than 46 magnified images from a single image frame, so we collected more than 1,000 images within 2 seconds. Moreover, the intelligent nanoscope achieves a 95% nanomaterial classification accuracy using a training set of 1,000 images, which is 45% more accurate than without the microsphere array. The intelligent nanoscope also achieves a 92% bacteria classification accuracy using a training set of 50,000 images, which is 35% more accurate than without the microsphere array. This platform accomplished rapid, accurate detection and classification of nanomaterials with minuscule size differences. The capabilities of this device hold the potential to further detect and classify smaller biological nanomaterials, such as viruses or extracellular vesicles. Lastly, the final chapter serves as a conclusion.
Here, I discuss current issues regarding the acoustofluidic scanning nanoscope, review the current limitations of the technology, and give suggestions for different directions of microsphere imaging. Moreover, I provide my perspective on the future development of the acoustofluidic scanning nanoscope and potential new applications. I discuss how the technologies developed in this dissertation can be improved and applied to new applications in nanoimaging.
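The tile-merging step of the dual-camera design (the alpha blending process) can be sketched as a per-pixel weighted average of overlapping tiles; real mosaicking additionally requires registering the tile positions from the tracked particle path. The tile values and alpha below are illustrative.

```python
def alpha_blend(tile_a, tile_b, alpha):
    """Blend two equally sized grayscale tiles: alpha*tile_a + (1-alpha)*tile_b."""
    return [[alpha * a + (1.0 - alpha) * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(tile_a, tile_b)]

# Two overlapping 2x2 tiles (illustrative intensities in [0, 1]).
tile_a = [[1.0, 0.0], [0.5, 0.25]]
tile_b = [[0.0, 1.0], [0.5, 0.75]]
merged = alpha_blend(tile_a, tile_b, alpha=0.5)
```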
Item Open Access Exploiting Multi-Look Information for Landmine Detection in Forward Looking Infrared Video (2013) Malof, Jordan
Forward Looking Infrared (FLIR) cameras have recently been studied as a sensing modality for use in landmine detection systems. FLIR-based detection systems benefit from larger standoff distances and faster rates of advance than other sensing modalities, but they also present significant challenges for detection algorithm design. FLIR video typically yields multiple looks at each object in the scene, each from a different camera perspective. As a result, each object in the scene appears in multiple video frames, each time with a different shape and size. This raises questions about how best to utilize such information. Evidence in the literature suggests such multi-look information can be exploited to improve detection performance but, to date, there has been no controlled investigation of multi-look information in detection. Any results are further confounded because no precise definition exists for what constitutes multi-look information. This thesis addresses these problems by developing a precise mathematical definition of "a look" and of how to quantify the multi-look content of video data. Controlled experiments are conducted to assess the impact of multi-look information on FLIR detection using several popular detection algorithms. Based on these results, two novel video processing techniques are presented, the plan-view framework and the FLRX algorithm, to better exploit multi-look information. The results show that multi-look information can have a positive or negative impact on detection performance depending on how it is used. The results also show that the novel algorithms presented here are effective techniques for analyzing video and exploiting any multi-look information to improve detection performance.
Item Open Access Micro-Anatomical Quantitative Imaging Towards Enabling Automated Diagnosis of Thick Tissues at the Point of Care (2015) Mueller, Jenna Lynne Hook
Histopathology is the clinical standard for tissue diagnosis. However, histopathology has several limitations: it requires tissue processing, which can take 30 minutes or more, and it requires a highly trained pathologist to diagnose the tissue. Additionally, the diagnosis is qualitative, and the lack of quantitation leads to possible observer-specific diagnoses. Taken together, these factors make it difficult to diagnose tissue at the point of care using histopathology.
Several clinical situations could benefit from more rapid and automated histological processing, which could reduce the time and the number of steps required between obtaining a fresh tissue specimen and rendering a diagnosis. For example, there is a need for rapid detection of residual cancer on the surface of tumor resection specimens during excisional surgeries, known as intraoperative tumor margin assessment. Additionally, rapid assessment of biopsy specimens at the point of care could enable clinicians to confirm that a suspicious lesion has been successfully sampled, thus preventing an unnecessary repeat biopsy procedure. Rapid and low-cost histological processing could also be useful in settings lacking the human resources and equipment necessary to perform standard histologic assessment. Lastly, automated interpretation of tissue samples could potentially reduce inter-observer error, particularly in the diagnosis of borderline lesions.
To address these needs, high quality microscopic images of the tissue must be obtained in rapid timeframes, in order for a pathologic assessment to be useful for guiding the intervention. Optical microscopy is a powerful technique to obtain high-resolution images of tissue morphology in real-time at the point of care, without the need for tissue processing. In particular, a number of groups have combined fluorescence microscopy with vital fluorescent stains to visualize micro-anatomical features of thick (i.e. unsectioned or unprocessed) tissue. However, robust methods for segmentation and quantitative analysis of heterogeneous images are essential to enable automated diagnosis. Thus, the goal of this work was to obtain high resolution imaging of tissue morphology through employing fluorescence microscopy and vital fluorescent stains and to develop a quantitative strategy to segment and quantify tissue features in heterogeneous images, such as nuclei and the surrounding stroma, which will enable automated diagnosis of thick tissues.
To achieve these goals, three specific aims were proposed. The first aim was to develop an image processing method that can differentiate nuclei from background tissue heterogeneity and enable automated diagnosis of thick tissue at the point of care. A computational technique called sparse component analysis (SCA) was adapted to isolate features of interest, such as nuclei, from the background. SCA has been used previously in the image processing community for image compression, enhancement, and restoration, but has never been applied to separate distinct tissue types in a heterogeneous image. In combination with a high-resolution fluorescence microendoscope (HRME) and the contrast agent acriflavine, the utility of this technique was demonstrated through imaging preclinical sarcoma tumor margins. Acriflavine localizes to the nuclei of cells, where it reversibly associates with RNA and DNA. Additionally, acriflavine shows some affinity for collagen and muscle. SCA was adapted to isolate acriflavine positive features, or APFs (which correspond to RNA and DNA), from background tissue heterogeneity. The circle transform (CT) was applied to the SCA output to quantify the size and density of overlapping APFs. The sensitivity of the SCA+CT approach to variations in APF size, density, and background heterogeneity was demonstrated through simulations. Specifically, SCA+CT achieved the lowest errors for higher contrast ratios and larger APF sizes. When applied to tissue images of excised sarcoma margins, SCA+CT correctly isolated APFs and showed consistently increased density in tumor and tumor + muscle images compared to images containing muscle. Next, variables were quantified from images of resected primary sarcomas and used to optimize a multivariate model. The sensitivity and specificity for differentiating positive from negative ex vivo resected tumor margins were 82% and 75%, respectively.
The utility of this approach was further tested by imaging the in vivo tumor cavities of 34 mice after resection of a sarcoma, using local recurrence as a benchmark. When applied prospectively to images from the tumor cavity, the sensitivity and specificity for differentiating local recurrence were 78% and 82%, respectively. The results indicate that SCA+CT can accurately delineate APFs in heterogeneous tissue, which is essential to enable automated and rapid surveillance of tissue pathology.
Two primary challenges were identified in the work in aim 1. First, while SCA can be used to isolate features, such as APFs, from heterogeneous images, its performance is limited by the contrast between APFs and the background. Second, while it is feasible to create mosaics by scanning a sarcoma tumor bed in a mouse, which is on the order of 3-7 mm in any one dimension, it is not feasible to evaluate an entire human surgical margin. Thus, improvements to the microscopic imaging system were made to (1) improve image contrast through rejecting out-of-focus background fluorescence and to (2) increase the field of view (FOV) while maintaining the sub-cellular resolution needed for delineation of nuclei. To address these challenges, a technique called structured illumination microscopy (SIM) was employed in which the entire FOV is illuminated with a defined spatial pattern rather than scanning a focal spot, such as in confocal microscopy.
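For context, one classic way to recover an optically sectioned image from grid illumination uses three images with the pattern shifted by a third of a period; square-law demodulation keeps only signal that retains the pattern's modulation, i.e., in-focus structure. The demodulation below is the standard Neil-Juskaitis-Wilson formula; whether this exact reconstruction is used in the system described here is an assumption.

```python
import math

def sim_section(i1, i2, i3):
    """Sectioned value from three grid-illuminated images with pattern phases
    0, 2*pi/3, 4*pi/3; returns the local modulation amplitude, which vanishes
    for out-of-focus (unmodulated) light."""
    return math.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2) / math.sqrt(4.5)

phases = [0.0, 2.0 * math.pi / 3.0, 4.0 * math.pi / 3.0]
in_focus = [1.0 + 0.8 * math.cos(p) for p in phases]  # modulation survives
out_of_focus = [1.0, 1.0, 1.0]                        # modulation washed out by defocus
```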
Thus, the second aim was to improve image contrast and increase the FOV by employing wide-field, non-contact structured illumination microscopy, and to optimize the segmentation algorithm for the new imaging modality. Both image contrast and FOV were increased through the development of a wide-field fluorescence SIM system. Clear improvement in image contrast was seen in structured illumination images compared to uniform illumination images. Additionally, the FOV is over 13X larger than that of the fluorescence microendoscope used in aim 1. Initial segmentation results of SIM images revealed that SCA is unable to segment large numbers of APFs in the tumor images. Because the FOV of the SIM system is over 13X larger than the FOV of the fluorescence microendoscope, dense collections of APFs commonly seen in tumor images could no longer be sparsely represented, and the fundamental sparsity assumption associated with SCA was no longer met. Thus, an algorithm called maximally stable extremal regions (MSER) was investigated as an alternative approach for APF segmentation in SIM images. MSER was able to accurately segment large numbers of APFs in SIM images of tumor tissue. In addition to optimizing MSER for SIM image segmentation, an optimal frequency of the illumination pattern used in SIM was carefully selected, because the image signal-to-noise ratio (SNR) depends on the grid frequency. A grid frequency of 31.7 mm^-1 led to the highest SNR and the lowest percent error associated with MSER segmentation.
Once MSER was optimized for SIM image segmentation and the optimal grid frequency was selected, a quantitative model was developed to diagnose mouse sarcoma tumor margins that were imaged ex vivo with SIM. Tumor margins were stained with acridine orange (AO) in aim 2 because AO was found to stain the sarcoma tissue more brightly than acriflavine. Both acriflavine and AO are intravital dyes, which have been shown to stain nuclei, skeletal muscle, and collagenous stroma. A tissue-type classification model was developed to differentiate localized regions (75x75 µm) of tumor from skeletal muscle and adipose tissue based on the MSER segmentation output. Specifically, a logistic regression model was used to classify each localized region. The logistic regression model yielded an output in terms of probability (0-100%) that tumor was located within each 75x75 µm region. The model performance was tested using a receiver operating characteristic (ROC) curve analysis that revealed 77% sensitivity and 81% specificity. For margin classification, the whole margin image was divided into localized regions and this tissue-type classification model was applied. In a subset of 6 margins (3 negative, 3 positive), it was shown that with a tumor probability threshold of 50%, 8% of all regions from negative margins exceeded this threshold, while over 17% of all regions exceeded the threshold in the positive margins. Thus, 8% of regions in negative margins were considered false positives. These false positive regions are likely due to the high density of APFs present in normal tissues, which clearly demonstrates a challenge in implementing this automatic algorithm based on AO staining alone.
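The per-region classification step can be sketched as a logistic model mapping segmentation statistics to a tumor probability. The feature values, coefficient values, and intercept below are made up for illustration, not the fitted model from this work.

```python
import math

def tumor_probability(features, coefs, intercept):
    """Logistic-regression probability that a localized region contains tumor."""
    z = intercept + sum(c * f for c, f in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical per-region features: (APF density, mean APF size); the
# coefficients and intercept are illustrative placeholders.
p = tumor_probability([0.42, 6.1], coefs=[4.0, 0.3], intercept=-3.0)
flagged = p > 0.5  # the 50% tumor-probability threshold described above
```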
Thus, the third aim was to improve the specificity of the diagnostic model by leveraging other sources of contrast. Modifications were made to the SIM system to enable fluorescence imaging at a variety of wavelengths. Specifically, the SIM system was modified to enable imaging of red fluorescent protein (RFP)-expressing sarcomas, which were used to delineate the location of tumor cells within each image. Initial analysis of AO-stained panels confirmed that there was room for improvement in tumor detection, particularly with regard to false positive regions that were negative for RFP. One approach for improving the specificity of the diagnostic model was to investigate a fluorophore more specific to tumor. Specifically, tetracycline was selected because it appeared to specifically stain freshly excised tumor tissue in a matter of minutes, and was non-toxic and stable in solution. Results indicated that tetracycline staining shows promise for increasing the specificity of tumor detection in SIM images of a preclinical sarcoma model, and further investigation is warranted.
In conclusion, this work presents the development of a combination of tools that is capable of automated segmentation and quantification of micro-anatomical images of thick tissue. When compared to the fluorescence microendoscope, wide-field multispectral fluorescence SIM imaging provided improved image contrast, a larger FOV with comparable resolution, and the ability to image a variety of fluorophores. MSER was an appropriate and rapid approach to segment dense collections of APFs from wide-field SIM images. Variables that reflect the morphology of the tissue, such as the density, size, and shape of nuclei and nucleoli, can be used to automatically diagnose SIM images. The clinical utility of SIM imaging and MSER segmentation to detect microscopic residual disease has been demonstrated by imaging excised preclinical sarcoma margins. Ultimately, this work demonstrates that fluorescence imaging of tissue micro-anatomy combined with a specialized algorithm for delineation and quantification of features is a means for rapid, non-destructive and automated detection of microscopic disease, which could improve cancer management in a variety of clinical scenarios.
Item Open Access Nonparametric Bayesian Models for Joint Analysis of Imagery and Text (2014) Li, Lingbo
It has become increasingly important to develop statistical models to manage large-scale, high-dimensional image data. This thesis presents novel hierarchical nonparametric Bayesian models for joint analysis of imagery and text. The thesis consists of two main parts.
The first part addresses single-image processing. We first present a spatially dependent model for simultaneous image segmentation and interpretation. Given a corrupted image, by imposing spatial inter-relationships within the imagery, the model not only improves reconstruction performance but also yields smooth segmentation. We then develop an online variational Bayesian algorithm for dictionary learning to process large-scale datasets, based on online stochastic optimization with a natural gradient step. We show that the dictionary is learned simultaneously with image reconstruction on large natural images containing tens of millions of pixels.
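A minimal sketch of the online flavor of dictionary learning, under simplifying assumptions: the sparse-coding step is replaced by ridge regression and a plain (not natural) gradient step updates the dictionary, so this is only a caricature of the online variational Bayesian algorithm described above.

```python
import numpy as np

def online_dictionary_step(D, x, lam=0.1, lr=0.01):
    """One online update of dictionary D (columns are unit-norm atoms) for sample x."""
    # Ridge approximation to the sparse code of x under the current dictionary.
    a = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ x)
    # Gradient step on the reconstruction error, then renormalize the atoms.
    D = D + lr * np.outer(x - D @ a, a)
    return D / np.maximum(np.linalg.norm(D, axis=0), 1e-12)

rng = np.random.default_rng(0)
D = rng.normal(size=(16, 8))
D /= np.linalg.norm(D, axis=0)
for _ in range(200):  # stream synthetic samples one at a time
    D = online_dictionary_step(D, rng.normal(size=16))
```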
The second part applies dictionary learning to the joint analysis of multiple images and text to infer relationships among images. We show that feature extraction and image organization with annotation (when available) can be integrated by unifying dictionary learning and hierarchical topic modeling. We present image organization in both "flat" and hierarchical constructions. Compared with traditional algorithms in which feature extraction is separated from model learning, our algorithms not only better fit the datasets but also provide richer and more interpretable structures of images.
Item Open Access Retrospective Dosimetric Analysis of Occurrence of Radiation Pneumonitis(2021) Zhou, BanghaoPurpose: To retrospectively evaluate the impact of dosimetric parameters in treatment planning, and of dose discrepancies from patient inter-fractional motion, on the high radiation pneumonitis (RP) occurrence rate in breast cancer patients receiving radiotherapy at First People's Hospital of Kunshan, associated with the Duke Kunshan University Medical Physics Graduate Program. Method: Dose-volume parameters were extracted from breast cancer patients' treatment plans and compared with corresponding experience-based thresholds associated with RP, including total dose, mean lung dose (MLD), and the percent of lung volume receiving a dose of 5 Gy or higher (V5), 13 Gy or higher (V13), 20 Gy or higher (V20), and 30 Gy or higher (V30). In addition, an in-house dose calculation system based on MATLAB and the Varian Eclipse treatment planning system (TPS) was used to obtain actual dose distributions from planning computed tomography (CT) scans, cone-beam computed tomography (CBCT) scans, and couch shifts. The calculated actual dose was compared to the original planning dose to evaluate inter-fractional-motion-induced dose discrepancies and their impact on the occurrence of RP. Result: For patients diagnosed with RP, the median MLD is 15.38 Gy and the median V20 is 25.6%, both higher than the corresponding constraints of 14 Gy and 24%, respectively. Other dose-volume parameters were also much higher than their corresponding constraints for preventing RP. Inter-fractional patient motion induced discrepancies between planning and actual dose-volume parameters. For Patient 1, V20 increases from 23.93% to 28.33% due to the motion, exceeding the V20 constraint of 24%. V30 increases to 17.99%, which is very close to the V30 constraint of 18%.
For Patient 2, V10 increases from 32.00% to 35.43% and V13 increases from 29.86% to 32.99% due to the motion, with both coming to exceed their constraints. For Patient 3, MLD, V10, V13, V20, and V30 all decrease, with MLD and V20 falling below their constraints. Summary: Dose-volume parameters in breast cancer treatment plans at First People's Hospital of Kunshan were reviewed. The results show that the dose-volume parameters related to RP were higher than internationally recommended constraints, which contributes to the high RP incidence. In addition, an automated MATLAB-based actual-dose calculation system was developed and used to analyze the dose discrepancies between planning and actual dose distributions. Inter-fractional patient motion was found to cause discrepancies between the original planning dose and the actual dose.
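The dose-volume parameters used throughout this item have simple definitions: MLD is the mean dose over lung voxels, and Vx is the percent of lung volume receiving at least x Gy. A minimal sketch, assuming a uniform voxel grid and a boolean lung mask (the thesis's own system is MATLAB-based; this function name and interface are illustrative):

```python
import numpy as np

def dose_volume_metrics(dose_gy, lung_mask):
    """Compute mean lung dose (MLD, Gy) and Vx metrics from a dose grid.
    Vx = percent of lung volume receiving >= x Gy; uniform voxels assumed,
    so volume fractions reduce to voxel-count fractions."""
    lung = dose_gy[lung_mask]
    mld = lung.mean()
    vx = {f"V{x}": 100.0 * np.mean(lung >= x) for x in (5, 13, 20, 30)}
    return mld, vx
```

Running the same computation on the planning dose grid and on the CBCT-derived actual dose grid gives the before/after values compared in the abstract.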
Item Open Access Single-Cell Analysis of Transcriptional Dynamics During Cell Cycle Arrest(2017) Winski, David J.In the past decade, a challenge to the canonical model of cell cycle transcriptional control has been posed by a series of high-throughput gene expression studies in budding yeast. Using genetic methods to inhibit or lock the activity of the cyclin-CDK/APC oscillator, these population studies demonstrated that a significant proportion of cell cycle transcription persists in the absence of cyclin-CDK/APC oscillations. To account for these findings, a network of serially activating transcription factors with sources of negative feedback from transcriptional repressors (referred to as a "TF network") was proposed to drive cyclin-CDK/APC-independent gene expression.
However, population studies of cell cycle gene expression are limited by loss of phase synchrony, which restricts the timescale over which gene expression can be measured, and by expression averaging, which obscures heterogeneity of expression within the population. To circumvent these limitations, I used a single-cell timelapse microscopy approach to assess transcriptional dynamics of cell cycle regulated genes during extended cell cycle arrests in both the G1/S and early mitosis (metaphase) phases of the cell cycle.
During G1/S arrest, the transcriptional dynamics of four cell cycle regulated genes were assessed, and activation of out-of-phase cell cycle transcription was observed for two of these genes. Though budding oscillations were observed in G1/S arrested cells, robust transcriptional oscillations were not seen for any of the four genes, and budding dynamics were uncoupled from transcriptional dynamics after the first bud emergence. During cell cycle arrest in early mitosis, the transcriptional dynamics of ten cell cycle regulated genes were assessed, and activation of out-of-phase transcription was observed for four genes. All four genes activated once with canonical ordering, but robust oscillations were not observed during mitotic arrest. Together, these studies demonstrate activation, but not oscillation, of cell cycle transcription in the absence of cyclin-CDK/APC oscillations.
Item Open Access Stochastic Simulations for the Detection of Objects in Three Dimensional Volumes: Applications in Medical Imaging and Ocean Acoustics(2007-05-10T15:22:40Z) Shorey, Jamie MargaretGiven a known signal and perfect knowledge of the environment, there exist few detection and estimation problems that cannot be solved. Detection performance is limited by uncertainty in the signal, an imperfect model, uncertainty in environmental parameters, or noise. Complex environments such as the ocean acoustic waveguide and the human anatomy are difficult to model exactly, as they can differ, change with time, or be difficult to measure. We address uncertainty in the model or its parameters by incorporating their possible values into our detection algorithm. Noise in the signal is not so easily dismissed, and we set out to identify cases in which what is frequently termed a nuisance parameter might instead increase detection performance. If the signal and the noise component originate from the same system, then it is reasonable to assume that the noise contains information about the system as well. Because of the negative effects of ionizing radiation, it is of interest to maximize the amount of diagnostic information obtained from a single exposure. Scattered radiation is typically considered image-degrading noise. However, it also depends on the structure of the medium and can be estimated using stochastic simulation. We describe a novel Bayesian approach to signal detection that increases performance by including some characteristics of the scattered signal. This dissertation examines medical imaging problems specific to mammography. To model environmental uncertainty, we have written software to produce realistic voxel phantoms of the breast. The software includes a novel algorithm for producing three-dimensional distributions of fat and glandular tissue as well as a stochastic ductal branching model.
The image produced by a radiographic system cannot be determined analytically, since the interactions of particles are a random process. We have developed a particle transport software package to model a complete radiographic system, including a realistic x-ray spectrum model, an arbitrary voxel-based medium, and an accurate material library. Novel features include an efficient voxel ray tracing algorithm that reflects the true statistics of the system, as well as the ability to produce separable images of scattered and direct radiation. Similarly, the ocean environment includes a high degree of uncertainty. A pressure wave propagating through a channel produces a measurable collection of multipath arrivals. By modeling changes in the pressure wave front, we can estimate the expected pattern that appears at a given location. For this purpose we have created an ocean acoustic ray tracing code that produces time-domain multipath arrival patterns for arbitrary three-dimensional environments. This iterative algorithm is based on a generalized recursive ray acoustics algorithm. To produce a significant gain in computation speed, we model the ocean channel as a linear, time-invariant system. The code differs from other ocean propagation codes in that it uses time as the dependent variable and can compute sound pressure levels along a ray path, effectively measuring the spatial impulse response of the ocean medium. This dissertation also investigates Bayesian approaches to source localization in an uncertain 3-D ocean environment. A time-domain-based optimal a posteriori probability bistatic source localization method is presented. This algorithm uses as its observable data a collection of acoustic time arrival patterns that have been propagated through a 3-D acoustic model. These replica patterns are collected over the possible range of unknown environmental parameters. Receiver operating characteristics for a bistatic detection problem are presented using both simulated and measured data.
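Treating the ocean channel as a linear, time-invariant system, as described above, means a received waveform is just the source signal convolved with a multipath impulse response. A minimal sketch, assuming arrivals are given as (delay in seconds, amplitude) pairs; the function names and discretization are illustrative, not the dissertation's code:

```python
import numpy as np

def multipath_response(arrivals, fs, duration):
    """Build a discrete impulse response from (delay_s, amplitude)
    multipath arrivals, sampled at fs Hz over the given duration."""
    h = np.zeros(int(duration * fs))
    for delay, amp in arrivals:
        idx = int(round(delay * fs))
        if idx < len(h):
            h[idx] += amp  # coincident paths add coherently
    return h

def propagate(source, h):
    """Received waveform = source convolved with the channel response."""
    return np.convolve(source, h)
```

Under the LTI assumption, replica arrival patterns for many candidate source positions and environmental parameters can be precomputed once and reused, which is the computational gain the abstract refers to.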