Duke Student Scholarship
Permanent URI for this community: https://hdl.handle.net/10161/1
Browsing Duke Student Scholarship by Affiliation "Duke"
Now showing 1 - 20 of 53
Item Open Access
A Comparison of Values around Cruise Tax in Iceland and Alaska (2018)
Stith, Michaela
Cruise ships pose many environmental harms: they emit more black carbon and CO₂ per passenger-mile than any other vehicle, discharge untreated sewage and wastewater into the open ocean, carry large quantities of heavy fuel oil onboard, and transport invasive species via ballast water. As the Arctic Ocean melts and becomes more accessible to marine vessels, cruise lines have taken advantage of the “last chance tourism” phenomenon and increased the number of cruise ships touring the Arctic. Without sufficient regulation, the influx of cruise ships could create negative impacts for the Arctic environment. In this study I use Alaska’s Cruise Ship Tax Initiative as a model for cruise regulation and examine the high-level values that would influence Icelanders to adopt a similar, explicitly environmental per-passenger cruise tax. To determine the values to which advocates of a cruise tax should appeal, we interviewed twenty policymakers and stakeholders in Ísafjörður and Reykjavík, Iceland, using the laddering method. As an extension of the study I interviewed one government administrator and one cruise tax advocate in Southeast Alaska to compile lessons learned from the implementation of the Alaska Cruise Ship Tax Initiative. The values from each location were compared to find which lessons would be relevant for Icelanders. The value categories that would influence the tax’s implementation were good governance, cultural richness, quality of life, regional survival, economic growth, nature’s inherent value and resource-based life. Icelandic participants showed low faith in government’s efficacy – i.e., its ability to do what it says it will do – and expressed concerns that dependence on tourism and the national government’s marginalization of the Westfjords could negatively impact regional survival.
Overall, sustainable tourism development and environmental protection of natural areas were favored by Icelandic interviewees. To advocate a per-passenger environmental tax, stakeholders and policymakers could emphasize the tax’s capacity to encourage sustainable tourism development by building environmental infrastructure (especially paths and waste treatment facilities) and limiting mass tourism. Based on Alaskan experiences, Icelanders should strongly reconsider their dismissal of monitoring if they want to ensure a pristine environment.

Item Open Access
A novel, non-apoptotic role for Scythe/BAT3: a functional switch between the pro- and anti-proliferative roles of p21 during the cell cycle. (2012)
Yong, Sheila T.
Scythe/BAT3 is a member of the BAG protein family whose role in apoptosis, a form of programmed cell death, has been extensively studied. However, since the developmental defects observed in Bat3‐null mouse embryos cannot be explained solely by defects in apoptosis, I investigated whether BAT3 is also involved in regulating cell‐cycle progression. Using a stable, inducible Bat3‐knockdown cellular system, I demonstrated that a reduced BAT3 protein level causes a delay in both the G1/S transition and G2/M progression. Concurrent with these changes in cell‐cycle progression, I observed a reduction in the turnover and phosphorylation of the CDK inhibitor p21. p21 is best known as an inhibitor of DNA replication; however, phosphorylated p21 has also been shown to promote G2/M progression. Additionally, I observed that the p21 turnover rate was also reduced in Bat3‐knockdown cells released from G2/M synchronization. My findings indicate that in Bat3‐knockdown cells, p21 continues to be synthesized during cell‐cycle phases that do not normally require p21, resulting in p21 protein accumulation and a subsequent cell‐cycle delay.
Finally, I showed that BAT3 co‐localizes with p21 during the cell cycle and is required for the translocation of p21 from the cytoplasm to the nucleus during the G1/S transition and G2/M progression. My study reveals a novel, non‐apoptotic role for BAT3 in cell‐cycle regulation. By maintaining a low p21 protein level during the G1/S transition, BAT3 counteracts the inhibitory effect of p21 on DNA replication and thus enables cells to progress from G1 into S phase. Conversely, during G2/M progression, BAT3 facilitates p21 phosphorylation, an event that promotes G2/M progression. BAT3 modulates these pro‐ and anti‐proliferative roles of p21 at least in part by regulating the translocation of p21 between the cytoplasm and nucleus, ensuring proper functioning and regulation of p21 in the appropriate intracellular compartments during different cell‐cycle phases.

Item Open Access
A PK2/Bv8/PROK2 antagonist suppresses tumorigenic processes by inhibiting angiogenesis in glioma and blocking myeloid cell infiltration in pancreatic cancer. (2011)
Curtis, Valerie Forbes
In many cancer types, infiltration of bone marrow-derived myeloid cells in the tumor microenvironment is often associated with enhanced angiogenesis and tumor progression, resulting in poor prognosis. The polypeptide chemokine PK2 (Bv8) regulates myeloid cell mobilization from the bone marrow, leading to activation of angiogenesis as well as accumulation of macrophages and neutrophils in the tumor site. Neutralizing antibodies against PK2 display potent anti-tumor efficacy, illustrating the potential of PK2 antagonists as therapeutic agents for the treatment of cancer. However, antibody-based therapies can be too large to treat certain diseases and too expensive to manufacture, whereas small-molecule therapeutics are not subject to these limitations.
In this study, we demonstrate the anti-tumor activity of a small-molecule PK2 antagonist, PKRA7, in the contexts of glioblastoma and pancreatic cancer xenograft tumor models. In the highly vascularized glioblastoma, PKRA7 decreased blood vessel density while increasing necrotic areas in the tumor mass. Consistent with the anti-angiogenic activity of PKRA7 in vivo, this compound effectively reduced PK2-induced microvascular endothelial cell branching in vitro. For the poorly vascularized pancreatic cancer, the primary anti-tumor effect of PKRA7 is mediated by the blockage of myeloid cell migration and infiltration. At the molecular level, PKRA7 inhibits PK2-induced expression of several pro-migratory chemokines and chemokine receptors in macrophages. Combining PKRA7 treatment with standard chemotherapeutic agents resulted in enhanced effects in xenograft models for both glioblastoma and pancreatic tumors. Taken together, our results indicate that the anti-tumor activity of PKRA7 can be mediated by distinct mechanisms that are relevant to the pathological features of the specific type of cancer. This small-molecule PK2 antagonist holds promise for further development as an effective agent for combination cancer therapy.

Item Open Access
A study of the aeroelastic behavior of flat plates and membranes with mixed boundary conditions in axial subsonic flow (2011)
Bloomhardt, Elizabeth M.
In support of the noise reduction targets for future generations of transport aircraft, as set forth by NASA, the fundamental aeroelastic behavior of trailing edge flap technology was explored. Using a plate structural model to approximate the structural configuration and linear potential flow theory to represent the aerodynamics, aeroelastic behavior was characterized for two structural configurations using two different sets of boundary conditions for each. The two structural configurations considered were a) all edges fixed and b) leading and side edges fixed, trailing edge free.
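For orientation, the structural side of such a plate aeroelastic model is commonly written as the classical thin-plate equation forced by an aerodynamic pressure difference; the dissertation's exact formulation is not given in the abstract, so take this only as the generic starting point:

```latex
D\,\nabla^4 w \;+\; \rho_m h\,\frac{\partial^2 w}{\partial t^2} \;=\; \Delta p(x, y, t)
```

Here \(w\) is the transverse deflection, \(D\) the flexural rigidity, \(\rho_m h\) the mass per unit area, and \(\Delta p\) the fluid loading, supplied in this work by linear potential flow theory. Flutter and divergence behavior then follow from the eigenvalues of the coupled fluid-structure system under the stated edge conditions.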
In each configuration both simply supported and clamped boundary conditions were considered. Results are compared to calculations presented in the literature for the all-edges-simply-supported configuration.

Item Open Access
An efficient finite element method for embedded interface problems (2013)
Annavarapu, Chandrasekhar
We focus on developing a computationally efficient finite element method for interface problems. Finite element methods are severely constrained in their ability to resolve interfaces. Many of these limitations stem from their inability to represent interface geometry independently of the underlying discretization. We propose an approach that facilitates such an independent representation by embedding interfaces in the underlying finite element mesh. This embedding, however, raises stability concerns for existing algorithms used to enforce interfacial kinematic constraints. To address these stability concerns, we develop robust methods to enforce interfacial kinematics over embedded interfaces. We begin by examining embedded Dirichlet problems – a simpler class of embedded constraints. We develop both stable methods, based on Lagrange multipliers, and stabilized methods, based on Nitsche’s approach, for enforcing Dirichlet constraints over three-dimensional embedded surfaces, and compare and contrast their performance. We then extend these methods to enforce perfectly-tied kinematics for elastodynamics with explicit time integration. In particular, we examine the coupled aspects of spatial and temporal stability for Nitsche’s approach. We address the incompatibility of Nitsche’s method with explicit time integration by (a) proposing a modified weighted stress variational form, and (b) proposing a novel mass-lumping procedure. We revisit Nitsche’s method and inspect the effect of this modified variational form on the interfacial quantities of interest.
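As background for the discussion of Nitsche's approach, the classical (boundary-fitted) symmetric Nitsche formulation for a Poisson problem with Dirichlet data \(g\) on \(\Gamma\) reads as follows; the embedded-interface variants studied in the dissertation modify the weighting and stabilization of the flux terms, so this is only the reference point:

```latex
\int_\Omega \nabla u_h \cdot \nabla v_h \, d\Omega
\;-\; \int_\Gamma (\partial_n u_h)\, v_h \, d\Gamma
\;-\; \int_\Gamma (\partial_n v_h)\,(u_h - g) \, d\Gamma
\;+\; \frac{\alpha}{h} \int_\Gamma (u_h - g)\, v_h \, d\Gamma
\;=\; \int_\Omega f\, v_h \, d\Omega
\qquad \forall\, v_h
```

The stabilization parameter \(\alpha\) and the weighting of the interfacial flux terms are precisely the method parameters whose tuning governs the trade-off between stable constraint enforcement (large \(\alpha\)) and accurate flux recovery (small \(\alpha\)).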
We establish that the performance of this method, with respect to recovery of interfacial quantities, is governed significantly by the choices for the various method parameters, viz. stabilization and weighting. We establish a relationship between these parameters and propose an optimal choice for the weighting. We further extend this approach to handle non-linear, frictional sliding constraints at the interface. The naturally non-symmetric nature of these problems motivates us to omit the symmetry term arising in Nitsche’s method. We contrast the performance of the proposed approach with the more commonly used penalty method. Through several numerical examples, we show that with the proposed choice of weighting and stabilization parameters, Nitsche’s method achieves the right balance between accurate constraint enforcement and flux recovery – a balance hard to achieve with existing methods. Finally, we extend the proposed approach to intersecting interfaces and conduct numerical studies on problems with junctions and complex topologies.

Item Open Access
An entirely cell-based system to generate single-chain antibodies against cell surface receptors. (2008)
Chen, Yu-Hsun Jason
The generation of recombinant antibodies (Abs) using phage display is a proven method to obtain a large variety of Abs that bind with high affinity to a given antigen (Ag). Traditionally, the generation of single-chain Abs depends on the use of recombinant proteins in several stages of the procedure. This can be a problem, especially in the case of cell surface receptors, because Abs generated and selected against recombinant proteins may not bind the same protein expressed on a cell surface in its native form, and because the expression of some receptors as recombinant proteins is problematic. To overcome these difficulties, we developed a strategy to generate single-chain Abs that does not require the use of purified protein at any stage of the procedure.
In this strategy, stably transfected cells are used for the immunization of mice, measuring Ab responses to immunization, panning the phage library, high-throughput screening of arrayed phage clones, and characterization of recombinant single-chain variable regions (scFvs). This strategy was used to generate a panel of single-chain Abs specific for the innate immunity receptor Toll‐like receptor 2 (TLR2). Once generated, individual scFvs were subcloned into an expression vector allowing the production of recombinant antibodies in insect cells, thus avoiding the contamination of recombinant Abs with microbial products. This cell‐based system efficiently generates Abs that bind native molecules displayed on cell surfaces, bypasses the requirement for recombinant protein production, and avoids the risk of microbial component contamination. However, an inconvenience of this strategy is that it requires construction of a new library for each target TLR. This problem might be solved by using non‐immune antibody libraries to obtain antibodies against multiple TLRs. Non‐immune libraries contain a wide variety of antibodies, but these are often low affinity, while immune libraries, derived from immunized animals, contain a high frequency of high-affinity antibodies but are typically limited to a single antigen. In addition, it can be difficult to produce non‐immune libraries with sufficient complexity to select Abs against multiple Ags. Because the re‐assortment of VH and VL regions that occurs during antibody library construction greatly increases library complexity, we hypothesized that an immune antibody library produced against one member of a protein family would contain antibodies specific for other members of the same protein family. Here, we tested this hypothesis by mining an existing anti‐hTLR2 antibody library for antibodies specific for other members of the TLR family. This procedure, which we refer to as homolog mining, proved to be effective.
Using a cell‐based system to pan and screen our anti‐hTLR2 library, we identified single-chain antibodies specific for three of the four hTLR2 homologs we targeted. The antibodies identified, anti‐murine TLR2, anti‐hTLR5, and anti‐hTLR6, bind specifically to their targets, with no cross‐reactivity to hTLR2 or other TLRs tested. These results demonstrate that combinatorial re‐assortment of VH and VL fragments during Ab library construction increases Ab repertoire complexity, allowing antibody libraries produced by immunization with one antigen to be used to obtain antibodies specific to related antigens. The principle of homolog mining may be extended to other protein families and will facilitate and accelerate antibody production processes. In conclusion, we developed an entirely cell‐based method to generate antibodies that bind to native molecules on the cell surface, while eliminating the requirement for recombinant proteins and the risk of microbial component contamination. With homolog mining, this system is capable of generating antibodies not only against the original immunized Ag, but also against homologous Ags. In combination, this system proved to be an effective and efficient means of generating multiple antibodies that bind to multiple related Ags as they are displayed on cell surfaces.

Item Open Access
Analysis and Comparison of Queues with Different Levels of Delay Information. (2007)
Guo, Pengfei
Information about delays can enhance service quality in many industries. Delay information can take many forms, with different degrees of precision. Different levels of information have different effects on customers and thus on the overall system. The goal of this research is to explore these effects. We first consider a queue with balking under three levels of delay information: no information, partial information (the system occupancy) and full information (the exact waiting time).
We assume Poisson arrivals, independent, exponential service times, and a single server. Customers decide whether to stay or balk based on their expected waiting costs, conditional on the information provided. By comparing the three systems, we identify some important cases where more accurate delay information improves performance. In other cases, however, information can actually hurt the provider or the customers. We then investigate how different cost functions and weight distributions affect the system. Specifically, we compare systems where these parameters are related by various stochastic orders, under different information scenarios. We also explore the relationship between customer characteristics and the value of information. The results here are mostly negative. We find that the value of information need not be greater for less patient or more risk-averse customers. After that, we extend our analysis to systems with phase-type service times. Our analytical and numerical results indicate that the previous conclusions about systems with exponential service times still hold for phase-type service times. We also show that service-time variability degrades the system’s performance. Finally, we consider two richer models of information: In the first model, an arriving customer learns an interval in which the system occupancy falls. In the second model, each customer’s service time is the sum of a geometric number of i.i.d. exponential phases, and an arriving customer learns the total number of phases remaining in the system. For each information model, we compare two systems, identical except that one has more precise information.
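As a concrete illustration of the partial-information (occupancy) case described above, the sketch below treats a single-server Markovian queue in which a customer who observes n in system joins only if the expected waiting cost does not exceed a fixed reward; the system then behaves as an M/M/1/K queue whose stationary distribution is available in closed form. The reward/cost parameterization and function names are illustrative assumptions, not the dissertation's actual model:

```python
def partial_info_performance(lam, mu, reward, wait_cost):
    """Occupancy-information balking in an M/M/1 queue.

    A customer seeing n in system expects to wait (n + 1) / mu, so she
    joins iff wait_cost * (n + 1) / mu <= reward.  Customers therefore
    join while n <= K - 1 with K = floor(reward * mu / wait_cost), and
    the system behaves as an M/M/1/K queue.
    """
    K = int(reward * mu / wait_cost)            # max occupancy ever reached
    rho = lam / mu
    weights = [rho ** n for n in range(K + 1)]  # unnormalised stationary probs
    Z = sum(weights)
    p = [w / Z for w in weights]
    throughput = lam * (1 - p[K])               # arrivals that actually join
    mean_number = sum(n * p[n] for n in range(K + 1))
    return K, p, throughput, mean_number
```

For example, with arrival rate 0.8, service rate 1, reward 4 and unit waiting cost, customers balk whenever four are already present, so effective throughput falls below the raw arrival rate.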
We study the effects of information on performance as seen by the service provider and the customers.

Item Open Access
Attenuation of inflammatory events in human intervertebral disc cells with a tumor necrosis factor antagonist. (2010)
Sinclair, Steven Michael
STUDY DESIGN: The inflammatory responses of primary human intervertebral disc (IVD) cells to tumor necrosis factor α (TNF-α) and an antagonist were evaluated in vitro. OBJECTIVE: To investigate the ability of soluble TNF receptor type II (sTNFRII) to antagonize TNF-α-induced inflammatory events in primary human IVD cells in vitro. SUMMARY OF BACKGROUND DATA: TNF-α is a known mediator of inflammation and pain associated with radiculopathy and IVD degeneration. sTNFRs and their analogues are of interest for the clinical treatment of these IVD pathologies, although the effects of sTNFRs on human IVD cells remain unknown. METHODS: IVD cells were isolated from surgical tissues procured from 15 patients and cultured with or without 1.4 nmol/L TNF-α (25 ng/mL). Treatment groups were coincubated with varying doses of sTNFRII (12.5-100 nmol/L). Nitric oxide (NO), prostaglandin E₂ (PGE₂), and interleukin-6 (IL6) levels in media were quantified to characterize the inflammatory phenotype of the IVD cells. RESULTS: Across all patients, TNF-α induced large, statistically significant increases in NO, PGE₂, and IL6 secretion from IVD cells compared with controls (60-, 112-, and 4-fold increases, respectively; P < 0.0001). Coincubation of TNF-α with nanomolar doses of sTNFRII significantly attenuated the secretion of NO and PGE₂ in a dose-dependent manner, whereas IL6 levels were unchanged. Mean IC₅₀ values for NO and PGE₂ were found to be 35.1 and 20.5 nmol/L, respectively. CONCLUSION: Nanomolar concentrations of sTNFRII were able to significantly attenuate the effects of TNF-α on primary human IVD cells in vitro.
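To make the reported IC₅₀ values concrete, a standard one-site inhibition model (an assumption here — the abstract does not state the curve form actually fitted) predicts the fraction of the TNF-α-induced response remaining at a given antagonist dose:

```python
def response_remaining(dose_nM, ic50_nM):
    """One-site inhibition model: fraction of the TNF-induced response
    remaining at a given antagonist dose.  By definition of IC50,
    exactly half the response remains when dose == ic50."""
    return 1.0 / (1.0 + dose_nM / ic50_nM)

# Mean IC50s reported in the study: 35.1 nM (NO) and 20.5 nM (PGE2).
half_no = response_remaining(35.1, 35.1)     # 0.5 by construction
pge2_left = response_remaining(100.0, 20.5)  # ~17% of the PGE2 response remains
```

Under this model, the lower IC₅₀ for PGE₂ means the antagonist suppresses PGE₂ more strongly than NO at any given dose.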
These results suggest this sTNFR to be a potent TNF antagonist with the potential to attenuate inflammation in IVD pathology.

Item Open Access
B-lymphocyte effector functions in health and disease. (2010)
DiLillo, David John
B cells and humoral immunity make up an important component of the immune system and play a vital role in preventing and fighting off infection by various pathogens. B cells have also been implicated in the pathogenesis of autoimmune disease. However, the various functions that B cells perform during the development and maintenance of autoimmune conditions remain unclear. Therefore, the overall goal of this dissertation was to determine what roles B cells play during autoimmune disease. In Chapter 3 of this dissertation, the function of B cells was assessed during tumor immunity, a model of immune system activation and cellular immunity. To quantify B cell contributions to T cell-mediated anti-tumor immune responses, mature B cells were depleted from wild-type adult mice using CD20 monoclonal antibody (mAb) prior to syngeneic B16 melanoma tumor transfers. Remarkably, subcutaneous (s.c.) tumor volume and lung metastasis were increased two-fold in B cell-depleted mice. Effector-memory and interferon (IFN)γ- or tumor necrosis factor (TNF)α-secreting CD4+ and CD8+ T cell induction was significantly impaired in B cell-depleted mice with tumors. Tumor antigen (Ag)-specific CD8+ T cell proliferation was also impaired in tumor-bearing mice that lacked B cells. Thus, B cells were required for optimal T cell activation and cellular immunity in this in vivo non-lymphoid tumor model. In Chapter 4 of this dissertation, the roles that B cells play during immune responses elicited by different allografts were assessed, since allograft rejection is thought to be T cell-mediated. The effects of B cell depletion on acute cardiac rejection, chronic renal rejection, and skin graft rejection were compared using CD20 or CD19 mAbs.
Both CD20 and CD19 mAbs effectively depleted mature B cells, while CD19 mAb treatment depleted plasmablasts and some plasma cells. B cell depletion did not affect acute cardiac allograft rejection, although CD19 mAb treatment prevented allograft-specific IgG production. Nonetheless, CD19 mAb treatment significantly reduced renal allograft rejection and abrogated allograft-specific IgG development, while CD20 mAb treatment did not. By contrast, B cell depletion exacerbated skin allograft rejection and augmented the proliferation of adoptively transferred alloAg-specific CD4+ T cells, demonstrating that B cells can also negatively regulate allograft rejection. Thus, B cells can either positively or negatively regulate allograft rejection depending on the nature of the allograft and the intensity of the rejection response. Serum antibody (Ab) is, at least in part, responsible for protection against pathogens and tissue destruction during autoimmunity. In Chapter 5 of this dissertation, the mechanisms responsible for the maintenance of long-lived serum Ab levels were examined, since the relationship between memory B cells, long-lived plasma cells, and long-lived humoral immunity remains controversial. To address the roles of B cell subsets in the longevity of humoral responses, mature B cells were depleted in mice using CD20 mAb. CD20+ B cell depletion prevented humoral immune responses and class switching, and depleted existing and adoptively transferred B cell memory. Nonetheless, B cell depletion did not affect serum Ig levels, Ag-specific Ab titers, or bone marrow (BM) Ab-secreting plasma cell numbers. Co-blockade of LFA-1 and VLA-4 adhesion molecules temporarily depleted long-lived plasma cells from the BM. CD20+ B cell depletion plus LFA-1/VLA-4 mAb treatment significantly prolonged Ag-specific plasma cell depletion from the BM, with a significant decrease in Ag-specific serum IgG.
Collectively, these results indicate that BM plasma cells are intrinsically long-lived. Further, these studies demonstrate that mature and memory B cells are not required for maintaining BM plasma cell numbers, but are required for repopulation of plasma cell-deficient BM. Thus, depleting mature and memory B cells does not have a dramatic negative effect on pre-existing Ab levels. Collectively, the studies described in this dissertation demonstrate that B cells function through multiple effector mechanisms to influence the course and intensity of normal and autoreactive immune responses: the promotion of cellular immune responses and CD4+ T cell activation, the negative regulation of cellular immune responses, and the production and maintenance of long-lived Ag-specific serum Ab titers. Therefore, each of these three B cell effector mechanisms can contribute independently or in concert with the other mechanisms to clear pathogens or cause tissue damage during autoimmunity.

Item Open Access
Barnacle cement: a polymerization model based on evolutionary concepts. (2009-11)
Dickinson, Gary H.
The tenacity by which barnacles adhere has sparked a long history of scientific investigation into their adhesive mechanisms. To adhere, barnacles utilize a proteinaceous cement that rapidly polymerizes and forms adhesive bonds underwater, and is insoluble once polymerized. Although progress has been made towards understanding the chemical properties of cement proteins, the biochemical mechanisms of cement polymerization remain largely unknown. In this dissertation, I used evolutionary concepts to elucidate barnacle cement polymerization. Well-studied biological phenomena (blood coagulation in vertebrates and invertebrates) were used as models to generate hypotheses on the proteins and biochemical mechanisms involved in cement polymerization.
These model systems are under selective pressures similar to those on cement polymerization (life-or-death situations) and show similar chemical characteristics (soluble protein that quickly and efficiently coagulates). I describe a novel method for collection of unpolymerized cement. Multiple, independent techniques (AFM, FTIR, chemical staining for peroxidase and tandem mass spectrometry) support the validity of the collection technique. Identification of a large number of proteins besides ‘barnacle cement proteins’ with mass spectrometry, and observations of hemocytes in unpolymerized cement, inspired the hypothesis that barnacle cement is hemolymph. A striking biochemical resemblance was shown between barnacle cement polymerization and vertebrate blood coagulation. Clotted fibrin and polymerized cement were shown to be structurally similar (a mesh of fibrous protein) but biochemically distinct. Heparin, trypsin inhibitor and Ca2+ chelators impeded cement polymerization, suggesting trypsin and Ca2+ involvement in polymerization. The presence and activity of a cement trypsin-like serine protease was verified, and the protease was shown to be homologous to bovine pancreatic trypsin. Protease activity may activate cement structural precursors, allowing loose assembly with other structural proteins and surface rearrangement. Tandem mass spectrometry and Western blotting revealed a protein homologous to human coagulation factor XIII (fibrin-stabilizing factor: a transglutaminase that covalently cross-links fibrin monomers). Transglutaminase activity was verified and may covalently cross-link assembled cement monomers. As in other protein coagulation systems, heritable defects occur during cement polymerization.
High plasma protein concentration combined with sub-optimal enzyme and/or cofactor concentrations and sub-optimal physical/muscular parameters (associated with hemolymph release) results in improperly cured cement in certain individuals when polymerization occurs in contact with low-surface-energy silicone and its associated leached molecules.

Item Open Access
Bayesian Analysis of Latent Threshold Dynamic Models (2012)
Nakajima, Jochi
Time series modeling faces increasingly high-dimensional problems in many scientific areas. Lack of relevant, data-based constraints typically leads to increased uncertainty in estimation and degradation of predictive performance. This dissertation addresses these general questions with a new and broadly applicable idea based on latent threshold models. The latent threshold approach is a model-based framework for inducing data-driven shrinkage of elements of parameter processes, collapsing them fully to zero when redundant or irrelevant while allowing for time-varying non-zero values when supported by the data. This dynamic sparsity modeling technique is implemented in broad classes of multivariate time series models with application to various time series data. The analyses demonstrate the utility of the latent threshold idea in reducing estimation uncertainty and improving predictions as well as model interpretation. Chapter 1 overviews the idea of the latent threshold approach and outlines the dissertation. Chapter 2 introduces the new approach to dynamic sparsity using latent threshold modeling and also discusses Bayesian analysis and computation for model fitting. Chapter 3 describes latent threshold multivariate models for a wide range of applications in the real data analysis that follows. Chapter 4 provides US and Japanese macroeconomic data analysis using latent threshold VAR models. Chapter 5 analyzes time series of foreign currency exchange rates (FX) using latent threshold dynamic factor models.
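The latent-threshold mechanism just described has a compact generic form, stated here from the general latent threshold time-series literature rather than from any specific chapter: a time-varying coefficient is switched off exactly while an underlying latent process sits below its threshold,

```latex
b_{jt} = \beta_{jt}\,\mathbb{1}\{\,|\beta_{jt}| \ge d_j\,\},
\qquad
\beta_{jt} = \mu_j + \phi_j\,(\beta_{j,t-1} - \mu_j) + \eta_{jt},
\quad \eta_{jt} \sim N(0, \sigma_j^2),
```

so the effective coefficient \(b_{jt}\) is exactly zero whenever \(|\beta_{jt}| < d_j\) (dynamic sparsity) and follows the smooth autoregressive path \(\beta_{jt}\) otherwise, with the thresholds \(d_j\) learned from the data alongside the other parameters.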
Chapter 6 provides a study of electroencephalographic (EEG) time series using latent threshold factor process models. Chapter 7 develops a new framework of dynamic network modeling for multivariate time series using the latent threshold approach. Finally, Chapter 8 concludes the dissertation with open questions and future work.

Item Open Access
Beach and sea-cliff dynamics as a driver of long-term rocky coastline evolution and stability (Geology, 2012)
Limber, Patrick Wayland
Rocky coastlines, with wave-battered headlands interspersed with calm sandy beaches, stir imaginations and aesthetic sensibilities the way few other landscapes do. Despite their prevalence (sea cliffs or bluffs are present along nearly 75% of the world’s oceanic coastlines), we know very little about how rocky coastlines evolve. Quantitative studies of large-scale (>1 km) rocky coastline evolution are just beginning, and this work asks several unresolved and fundamental questions. For example, what determines the planform morphology of a rocky coastline? Can it reach an equilibrium configuration and cross-shore amplitude? What rocky coastline processes and characteristics scale the formation time and size of sea stacks? The overarching theme of the following four chapters is the dynamics between beaches and sea cliffs. Sea-cliff erosion and retreat is a primary source of beach sediment on rocky coastlines. As cliffs contribute sediment to the beach, it is distributed by alongshore sediment transport, and the beach can control future rates of sea-cliff retreat in two main ways: in small amounts, sediment can accelerate cliff retreat by acting as an abrasive tool, and in larger amounts, the beach acts as a protective cover by dissipating wave energy seaward of the sea cliff. These feedbacks have been observed on rocky coastlines and in laboratory experiments, but have not been explored in terms of their control on large-scale and long-term (i.e., millennia) rocky coastline evolution.
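The tools-versus-cover feedback described above is often summarized by a humped retreat-rate law: retreat is slow for a bare cliff, fastest at an intermediate beach width, and suppressed under a wide protective beach. The functional form and parameters below are purely illustrative stand-ins, not the dissertation's calibrated model:

```python
import math

def cliff_retreat_rate(beach_width, w_crit, max_rate=1.0):
    """Illustrative humped erosion law.  A thin beach supplies abrasive
    'tools' that speed cliff retreat; a wide beach dissipates wave
    energy seaward of the cliff and suppresses retreat.  The peak sits
    at beach_width == w_crit by construction."""
    x = beach_width / w_crit
    return max_rate * x * math.exp(1.0 - x)
```

Sweeping beach_width traces the hump: zero retreat with no beach, maximum retreat at the critical width, and decay back toward zero as protective cover dominates.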
The aim of this dissertation is to explore the range of ways that beach and sea-cliff dynamics can drive rocky coastline evolution with simple analytical and numerical models, and to generate testable predictions.

Item Open Access
Characterization of Image Quality for 3D Scatter Corrected Breast CT Images. (2012)
Pachon, Jan Harwin
The goal of this study was to characterize the image quality of our dedicated, quasi-monochromatic-spectrum, cone-beam breast imaging system under scatter-corrected and non-scatter-corrected conditions for a variety of breast compositions. CT projections were acquired of a breast phantom containing two concentric sets of acrylic spheres that varied in size (1-8 mm) based on their polar position. The breast phantom was filled with 3 different concentrations of methanol and water, simulating a range of breast densities (0.79-1.0 g/cc); acrylic yarn was sometimes included to simulate the connective tissue of a breast. For each phantom condition, 2D scatter was measured for all projection angles. Scatter-corrected and uncorrected projections were then reconstructed with an iterative ordered-subsets convex algorithm. Reconstructed image quality was characterized using SNR and contrast analysis, followed by a human observer detection task for the spheres in the different concentric rings. Results show that scatter correction effectively reduces the cupping artifact and improves image contrast and SNR. Results from the observer study indicate that there was no statistical difference in the number or sizes of lesions observed in the scatter- versus non-scatter-corrected images for all densities. Nonetheless, applying scatter correction for differing breast conditions improves overall image quality.

Item Open Access
Comparative performance of multiview stereoscopic and mammographic display modalities for breast lesion detection. (2010)
Webb, Lincoln Jon
PURPOSE: Mammography is known to be one of the most difficult radiographic exams to interpret.
Mammography has important limitations, including the superposition of normal tissue that can obscure a mass, the chance alignment of normal tissue that can mimic a true lesion, and the inability to derive volumetric information. It has been shown that stereomammography can overcome these deficiencies by showing that layers of normal tissue lie at different depths. If standard stereomammography (i.e., a single stereoscopic pair consisting of two projection images) can significantly improve lesion detection, how will multiview stereoscopy (MVS), in which many projection images are used, compare to mammography? The aim of this study was to assess the relative performance of MVS compared to mammography for breast mass detection. METHODS: The MVS image sets consisted of the 25 raw projection images acquired over an arc of approximately 45 degrees using a Siemens prototype breast tomosynthesis system. The mammograms were acquired using a commercial Siemens FFDM system. The raw data were taken from both of these systems for 27 cases, and realistic simulated mass lesions were added to duplicates of the 27 images at the same local contrast. The images with lesions (27 mammography and 27 MVS) and the images without lesions (27 mammography and 27 MVS) were then postprocessed to provide comparable and representative image appearance across the two modalities. All 108 image sets were shown to five full-time breast imaging radiologists in random order on a state-of-the-art stereoscopic display. The observers were asked to give a confidence rating for each image (0 for lesion definitely not present, 100 for lesion definitely present). The ratings were then compiled and processed using ROC and variance analysis. RESULTS: The mean AUC for the five observers was 0.614 ± 0.055 for mammography and 0.778 ± 0.052 for multiview stereoscopy. The difference of 0.164 ± 0.065 was statistically significant, with a p-value of 0.0148.
CONCLUSIONS: The differences in the AUCs and the p-value suggest that multiview stereoscopy has a statistically significant advantage over mammography in the detection of simulated breast masses. This highlights the dominance of anatomical noise over quantum noise in breast mass detection. It also shows that significant lesion detection can be achieved with MVS without any of the artifacts associated with tomosynthesis.

Item Open Access Compressive holography. (2012) Lim, Se Hoon

Compressive holography estimates images from incomplete data by using sparsity priors. It combines digital holography and compressive sensing. Digital holography consists of computational image estimation from data captured by an electronic focal plane array. Compressive sensing enables accurate data reconstruction through prior knowledge of the desired signal. Computational and optical co-design optimally supports compressive holography in the joint computational and optical domain. This dissertation explores two examples of compressive holography: estimation of 3D tomographic images from 2D data, and estimation of images from undersampled apertures. Compressive holography achieves single-shot holographic tomography using decompressive inference. In general, 3D image reconstruction suffers from underdetermined measurements with a 2D detector. Specifically, single-shot holographic tomography exhibits a uniqueness problem in the axial direction because the inversion is ill-posed. Compressive sensing alleviates the ill-posed problem by enforcing sparsity constraints. Holographic tomography is applied to video-rate microscopic imaging and diffuse object imaging. In diffuse object imaging, sparsity priors are not valid in a coherent image basis due to speckle, so incoherent image estimation is designed to preserve sparsity in an incoherent image basis with the support of multiple speckle realizations.
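The core idea of decompressive inference — recovering a sparse signal from underdetermined measurements by enforcing an l1 sparsity prior — can be sketched with a generic iterative soft-thresholding (ISTA) solver. This is a textbook stand-in, not the dissertation's reconstruction: a real compressive-holography inversion would replace the random matrix below with a holographic propagation operator.

```python
import numpy as np

def ista(A, y, lam=0.05, step=None, iters=200):
    """Iterative soft-thresholding for argmin_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - step * A.T @ (A @ x - y)            # gradient step on data fidelity
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)    # underdetermined: 40 meas., 100 unknowns
x_true = np.zeros(100)
x_true[[5, 30, 77]] = [1.0, -2.0, 1.5]              # sparse ground truth
y = A @ x_true
x_hat = ista(A, y, lam=0.01, iters=500)             # sparsity prior resolves the ambiguity
```

With only 40 measurements of 100 unknowns, least squares is hopelessly non-unique; the l1 prior singles out the sparse solution, which is the mechanism the abstract invokes for the axial uniqueness problem.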
High-pixel-count holography achieves high-resolution and wide field-of-view imaging. Coherent aperture synthesis is one method for increasing the effective aperture size of a detector. Scanning-based synthetic aperture confronts a multivariable global optimization problem due to time-space measurement errors; a hierarchical estimation strategy divides the global problem into multiple local problems with the support of computational and optical co-design. Compressive sparse aperture holography is another method. Compressive sparse sampling collects most of the significant field information with a small fill factor because object-scattered fields are locally redundant. Incoherent image estimation is adopted for the expanded modulation transfer function and compressive reconstruction.

Item Open Access Coordinated analysis of delayed sprites with high-speed images and remote electromagnetic fields (2010) Li, Jingbo

One of the most dramatic discoveries in solar-terrestrial physics in the past two decades is the sprite, a high-altitude optical glow produced by a lightning discharge. Previous sprite studies, including both theoretical modeling and remote measurements of optical emissions and associated radio emissions, have revealed many important features. However, in-situ measurements, which are critical for understanding the microphysics of sprites and constraining existing models, are almost impossible because of sprites' small time scale (a few ms) and large spatial scale (tens of km). In this work, we infer the lightning-driven ambient electric fields by combining remotely measured electromagnetic fields with numerical simulations. To accomplish this, we first extract the lightning source current from remotely measured magnetic fields with a deconvolution technique. We then apply this current source to an existing 2-D Finite Difference Time Domain (FDTD) model to compute the electric fields at sprite altitudes.
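The deconvolution step — recovering a source current waveform from a remotely measured field that is the convolution of that source with a propagation response — can be sketched with generic regularized spectral division. The dissertation's actual technique and propagation response are not specified here; the exponential response and Gaussian pulse below are purely synthetic assumptions.

```python
import numpy as np

def deconvolve(measured, impulse_response, eps=1e-3):
    """Wiener-style frequency-domain deconvolution.

    If measured = impulse_response * source (convolution), recover the
    source by spectral division; eps damps frequencies where the
    response has little energy, stabilizing the inversion.
    """
    n = len(measured)
    H = np.fft.rfft(impulse_response, n)
    Y = np.fft.rfft(measured, n)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.fft.irfft(X, n)

# synthetic check: a known source convolved with an assumed decaying response
t = np.arange(256).astype(float)
source = np.exp(-(((t - 40.0) / 8.0) ** 2))      # hypothetical current pulse
h = np.exp(-t / 20.0)                            # hypothetical propagation response
measured = np.convolve(source, h)[:256]
recovered = deconvolve(measured, h)              # should closely match `source`
```

In practice the response would come from a propagation model of the Earth-ionosphere waveguide rather than an assumed exponential, but the stabilization role of the regularizer is the same.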
These inferred electric fields compensate for the lack of in-situ measurements. A data set collected at two observation sites in 2005 combines simultaneous measurements of sprite optical emissions and the electromagnetic fields radiated by sprite-producing lightning. Sprite images from a high-speed camera, together with the measured wideband magnetic fields, remove the limitations imposed by the small sprite temporal scale and allow us to precisely determine the sprite initiation time and the time delay from the parent lightning discharge. Of 83 sprites analyzed, close to 50% are delayed by more than 10 ms after the lightning discharge and are empirically defined as long-delayed sprites. Compared with short-delayed sprites, which are driven by the lightning return stroke, all of these long-delayed sprites are associated with intense continuing current and large total charge moment changes. In addition, sferic bursts and slow intensifications are frequently detected before long-delayed sprites. These observations suggest a different initiation mechanism for long-delayed sprites. To investigate this, we inferred the lightning-driven electric fields at the sprite initiation time and altitude. Our results show that although long-delayed sprites are mainly driven by the continuing current rather than the lightning return stroke, the electric fields required to produce them are essentially the same as those that produce short-delayed sprites. Thus the initiation mechanism of long-delayed sprites is consistent with the conventional breakdown model. Our results also reveal that slow (5-20 ms) intensifications in continuing current can significantly increase high-altitude electric fields and play a major role in initiating delayed sprites. Sferic bursts, which were suggested as a direct cause of long-delayed sprites in previous studies, are linked to slow intensifications but are not themselves causal.
Previous studies based on remotely measured low-frequency radio emissions indicate that substantial electric current flows inside the sprite body. This charge motion, of unknown location and amount, is related to the detailed internal microphysics of sprite development, which is in turn connected to the impact sprites have on the mesosphere. In our data, the recorded high-speed images show the entire development history of sprite streamers. By assuming that streamers propagate along the direction of the local electric field, we estimate the amount of electric charge in sprites. Our results show that an individual bright core contains significant negative space charge, between 0.01 and 0.03 C. Numerical simulations also indicate that this sprite core region is at least a partial, and perhaps the dominant, source of the positive charge in the downward positive-polarity streamers. Thus the average amount of charge in each downward streamer is at least 2-4 × 10^-3 C. The connection between these charge regions is consistent with previous observations. The reported amount and location of the electric charge provide the initial conditions and key data to constrain existing streamer models. After initiation, sprite streamers propagate through an inhomogeneous medium from a strong-field region to a weak-field region. Their propagation properties reflect the physics of sprite development. For the first time, we measured downward streamer propagation behavior over the full sprite altitude extent. We found that downward streamers accelerate to a maximum velocity of 1-3 × 10^7 m/s and then immediately decelerate at an almost constant rate close to 10^10 m/s^2. The deceleration process dominates downward streamer propagation in both time and distance. Lightning-driven electric fields were inferred at streamer tip locations during their propagation. We found that most of the deceleration occurs at electric fields less than 0.1 Ek.
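Extracting streamer tip velocity and deceleration from per-frame altitudes is, at its core, finite differencing of a camera track. The sketch below uses a synthetic altitude track whose speed profile merely mimics the reported orders of magnitude (peak near 10^7 m/s, deceleration near 10^10 m/s^2); the frame rate, starting altitude, and profile shape are all illustrative assumptions, not data from this work.

```python
import numpy as np

def streamer_kinematics(altitude_m, frame_dt_s):
    """Velocity and acceleration of a streamer tip from per-frame altitudes,
    via central finite differences over high-speed-camera frames."""
    v = np.gradient(altitude_m, frame_dt_s)   # m/s (negative = downward)
    a = np.gradient(v, frame_dt_s)            # m/s^2
    return v, a

dt = 1e-4                                     # hypothetical 10,000 frames per second
t = np.arange(0.0, 2e-3, dt)
# synthetic speed profile: accelerate to ~2 x 10^7 m/s, then decay
speed = 2e7 * np.exp(-(((t - 5e-4) / 4e-4) ** 2))
altitude = 75e3 - np.cumsum(speed) * dt       # tip descends from 75 km
v, a = streamer_kinematics(altitude, dt)
```

The same differencing applied to real tip tracks yields the acceleration-then-deceleration signature described above.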
The results also show the dependence of sprite termination altitude on the ambient electric field. A minimum ambient electric field of about 0.05 Ek is consistently observed for streamers in different sprites or at different locations within a single sprite. These streamer propagation properties, as well as their connections to the ambient electric fields, can be applied to further constrain streamer models.

Item Open Access Deposition of silver nanoparticles in geochemically heterogeneous porous media: predicting affinity from surface composition analysis. (2011) Lin, Shihong

The transport of uncoated silver nanoparticles (AgNPs) in a porous medium composed of silica glass beads modified with a partial coverage of iron oxide (hematite) was studied and compared to that in a porous medium composed of unmodified glass beads (GB). At a pH lower than the point of zero charge (PZC) of hematite, the affinity of AgNPs for the hematite-coated glass bead (FeO-GB) surface was significantly higher than that for the uncoated surface. Over a range of mixing mass ratios of the two collector types, the average nanoparticle affinity for media composed of mixtures of FeO-GB and GB collectors, as quantified by the attachment efficiency, correlated linearly with the relative composition of those media, so that the average AgNP affinity for these media is readily predicted from the mass- (or surface-) weighted average of the affinities for each surface type. X-ray photoelectron spectroscopy (XPS) was used to quantify the composition of the collector surface as a basis for predicting the affinity of the nanoparticles for a heterogeneous collector surface. A correlation was also observed between the local abundances of AgNPs and FeO on the collector surface.

Item Open Access Distinct functions of POT1 at telomeres. (2008) Kendellen, Megan Fuller

Telomeres are nucleoprotein complexes that constitute the ends of eukaryotic chromosomes.
Telomeres differentiate the end of the chromosome from sites of DNA damage and control cellular replicative potential. The loss of telomere function has several biological consequences. First, dysfunctional telomeres elicit DNA damage responses and repair activities, which frequently induce the cytogenetic abnormalities and genomic instability that are characteristic of human cancer. Second, cellular immortalization resulting from inappropriate elongation of telomeres is a critical component of tumorigenesis. Alternatively, because telomere shortening limits replicative potential, abnormally short telomeres can result in premature cellular senescence, which is associated with human pathology ranging from anemia to atherosclerosis. Telomeric DNA is composed of tandem repeats of G-rich double-stranded (ds)DNA that terminates in a G-rich 3' single-stranded (ss)DNA overhang. Telomeres are thought to assume a lariat structure termed the t-loop, which is decorated by an assortment of telomere-associated proteins. The most unique and least well characterized of these proteins is POT1. POT1 binds telomeric ssDNA via a pair of N-terminal OB-folds. Through its C-terminal protein-interaction domain, POT1 directly binds the telomeric dsDNA-binding protein TRF2 and participates in a heterodimeric complex with the protein TPP1. Inhibition of POT1 induces a robust DNA damage response at telomeres and deregulation of telomere length homeostasis, indicating that POT1 is important for maintaining telomere stability and regulating telomere length. The goal of my thesis work was to determine which of the three major activities of POT1 (telomeric ssDNA-, TPP1-, or TRF2-binding) are required to properly localize POT1 to telomeres and to prevent the telomere instability and length deregulation that occur in the absence of POT1.
Using separation-of-function mutants of POT1 deficient in at least one of these activities, I found that POT1 depends on its heterodimeric partner TPP1, in cis with telomeric ssDNA-binding, to preserve telomere stability, while POT1 depends on its protein interaction with TRF2 to localize to telomeres, and on its TRF2- and telomeric ssDNA-binding activities in cis to regulate telomere length.

Item Open Access 'ECHO' (FOUR QUARTERS, 1983) Wiegman, R

Item Open Access Efficient selection of disambiguating actions for stereo vision (2010) Schaeffer, Monika

In many domains that involve the use of sensors, such as robotics or sensor networks, there are opportunities to use some form of active sensing to disambiguate data from noisy or unreliable sensors. These disambiguating actions typically take time and expend energy. One way to choose the next disambiguating action is to select the action with the greatest expected entropy reduction, or information gain. In this work, we consider active sensing in aid of stereo vision for robotics. Stereo vision is a powerful sensing technique for mobile robots, but it can fail in scenes that lack strong texture. In such cases, a structured light source, such as a vertical laser line, can be used for disambiguation. By treating the stereo matching problem as a specially structured HMM-like graphical model, we demonstrate that for a scan line with n columns and maximum stereo disparity d, the entropy-minimizing aim point for the laser can be selected in O(nd) time, a cost no greater than that of the stereo algorithm itself. A typical HMM formulation would suggest at least O(nd^2) time for the entropy calculation alone.
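The HMM view of a scan line can be sketched directly: disparities are hidden states, match likelihoods are emissions, and a smoothness transition couples neighboring columns. The sketch below is the naive O(nd^2) forward-backward computation of per-column disparity entropy, with a greedy "aim at the most uncertain column" heuristic standing in for the expected-information-gain criterion; the abstract's O(nd) selection exploits additional model structure not reproduced here, and all numbers are toy values.

```python
import numpy as np

def disparity_entropy(unary, trans):
    """Posterior disparity entropy per column of a scan line.

    unary[i, d]: match likelihood of disparity d at column i.
    trans[d, d']: smoothness transition between neighboring columns.
    Naive forward-backward, O(n d^2) for n columns and d disparities.
    """
    n, d = unary.shape
    fwd = np.zeros((n, d))
    bwd = np.zeros((n, d))
    fwd[0] = unary[0] / unary[0].sum()
    for i in range(1, n):                       # forward messages
        fwd[i] = unary[i] * (fwd[i - 1] @ trans)
        fwd[i] /= fwd[i].sum()
    bwd[-1] = 1.0
    for i in range(n - 2, -1, -1):              # backward messages
        bwd[i] = trans @ (unary[i + 1] * bwd[i + 1])
        bwd[i] /= bwd[i].sum()
    post = fwd * bwd                            # posterior marginals
    post /= post.sum(axis=1, keepdims=True)
    p = np.clip(post, 1e-12, 1.0)
    return -(p * np.log2(p)).sum(axis=1)        # bits of uncertainty per column

# toy scan line: 6 columns, 4 disparities; column 3 is textureless (uniform likelihood)
unary = np.full((6, 4), 0.05)
unary[np.arange(6), [1, 1, 2, 2, 3, 3]] = 1.0
unary[3] = 0.25                                 # ambiguous, textureless column
trans = np.full((4, 4), 0.1) + np.eye(4) * 0.6  # favors similar neighboring disparity
H = disparity_entropy(unary, trans)
aim = int(np.argmax(H))  # greedy proxy: aim the laser at the most uncertain column
```

The textureless column retains high posterior entropy even after smoothing, so the greedy heuristic aims the laser there, which is the qualitative behavior the entropy-reduction criterion formalizes.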