Browsing by Subject "Modeling"
Item Open Access
Linking Land Use and Water Quality: Guiding Development Surrounding Durham County's Drinking Watershed (2012-04-26)
Levin, Katie Rose

Cities and counties have an obligation to provide water to their citizens in the quality and quantity necessary to support a viable existence. To meet these demands, in 1929 the City of Durham dammed the Flat River, creating the reservoir named "Lake Michie" in the far northeastern part of Durham County. Although located in a primarily rural area, there are signs that stormwater runoff is having detrimental effects on Lake Michie. The reservoir has already lost a quarter of its holding capacity to sedimentation and was recently classified as eutrophic by the USGS. Development pressure will only increase: for the last ten years, Durham County's population has grown faster than the average across the state. To address development concerns, Durham County and the City of Durham created the Unified Development Ordinance (UDO), which provides enhanced protection for land in the Lake Michie watershed. The UDO limits the amount of impervious surface allowed on any one parcel in the watershed to 6%, while allowing transfers of development between parcels to discourage urban sprawl. In addition to the protection afforded by codes, Durham managers are interested in creating a unified conservation scheme based on preserving parcels as forested areas. This project provides information and maps that can be used for conservation planning. By combining topography, soils, and land use, areas likely to have the highest impact on water quality are highlighted. Using this information, parcels can be evaluated based on their relative impact on water quality. Likewise, parcels can be compared against each other for the relative impact they have on water quality, informing transfers of impervious surface area to meet the development code. By combining the scientific evaluation of land-use effects with the political boundaries of parcel ownership, officials can readily translate science into the politics of conservation and development. Like the New Hope Creek and Eno River conservation maps, Lake Michie now has a scientifically based conservation map to help officials and land managers preserve water quality into the future. Adviser: Dr. Dean Urban

Item Open Access
A mathematical model to assist phytoremediation management and evaluation (2020-04-22)
Wang, Xin

Phytoremediation is the use of plants and their associated microbes for environmental cleanup. The use of phytoremediation for soil cleanup faces a number of challenges, of which leaching of soil contaminants below the rooting zone poses a significant environmental threat. Partitioning of contaminants between plant uptake and leaching is the focus of this Master's Project (MP). An improved mathematical model of phytoremediation processes is developed that couples the hydrological balance and the soil contaminant balance for a dynamic vegetation system (i.e., rooting-zone depth and leaf area change in time during the remediation period). Two different measures of phytoremediation efficiency are then assessed under different water supply amounts and frequencies, soil and plant properties, and climatic conditions. It is found that the water supply pattern is a first-order factor controlling the efficiency of phytoremediation when viewed from the perspective of maximizing plant-water uptake of contaminants. Climate change could also exert significant influence by affecting growth patterns of the plant. Additionally, a geospatial analysis tool is created alongside the model to locate areas where phytoremediation may be an effective management option, when climatic and soil datasets are available. With this combined geospatial tool and the newly proposed model, phytoremediation managers can evaluate the potential phytoremediation efficiency according to their specific situation.
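The coupled structure described in this abstract (a root-zone water balance driving a contaminant balance, with losses split between plant uptake and leaching) can be illustrated with a minimal numerical sketch. The sketch below is not the MP's model: the single-bucket closure, the linear leakage law, and all parameter values are hypothetical stand-ins.

```python
import numpy as np

# Minimal coupled water/contaminant balance for a rooting zone (illustrative only).
# State: s = relative soil moisture [-], m = contaminant mass per unit area [g/m^2].
dt, days = 0.1, 365            # time step [d], simulation length [d]
n, Zr = 0.4, 0.5               # porosity [-], rooting-zone depth [m] (hypothetical)
ET_max, k_leak = 0.004, 0.02   # max evapotranspiration [m/d], leakage rate [1/d]
f_uptake = 0.8                 # fraction of contaminant carried with root water uptake

s, m = 0.3, 10.0
uptake_total = leach_total = 0.0
for t in np.arange(0.0, days, dt):
    rain = 0.03 if (t % 7.0) < dt else 0.0     # weekly watering pulse [m/d]
    ET = ET_max * s                            # moisture-limited plant water uptake [m/d]
    leak = k_leak * max(s - 0.25, 0.0)         # drainage below the root zone [m/d]
    c = m / (n * Zr * s)                       # pore-water concentration [g/m^3]
    up = f_uptake * ET * c                     # plant uptake of contaminant [g/m^2/d]
    lc = leak * c                              # leaching loss [g/m^2/d]
    s = float(np.clip(s + dt * (rain - ET - leak) / (n * Zr), 0.05, 1.0))
    m = max(m - dt * (up + lc), 0.0)
    uptake_total += dt * up
    leach_total += dt * lc

print(f"removed by plants: {uptake_total:.2f} g/m^2, leached: {leach_total:.2f} g/m^2")
```

Shortening or lengthening the watering interval in this toy shifts the split between uptake and leaching, which is the kind of first-order water-supply effect the abstract reports.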
Item Open Access
Dental Ecometrics as a Proxy of Paleoenvironment Reconstruction in the Miocene of South America (2017)
Spradley, Jackson Ples

In this dissertation I compile modern mammalian faunal lists, as well as ecomorphological measurements on living marsupials and rodents, to relate the diversity of small mammals, specifically the distributions of their dental topographies, to the climates in which they are found. The emphasis of this dissertation is to demonstrate the potential of distributions of dental topography metrics as proxies for the reconstruction of paleoenvironments in the Miocene of South America.

In Chapter 2, I compile complete, non-volant mammalian species lists for 85 localities across South America as well as 17 localities across Australia and New Guinea. Climatic and habitat variables were also recorded at each locality using GIS spatial data. Additionally, basic ecological data were collected for each species, including diet, body size, and mode of locomotion. Niche indices that describe the relative numbers of different ecologies were calculated for each locality. These indices then served as the predictor values in several regression models, including regression trees, random forests, and Gaussian process regression. The Australian/New Guinean localities were used as a geographically and phylogenetically independent sample for testing the models derived from South America.
As for the dental ecomorphological analysis, I use three separate measures of dental topography, each of which captures a different component of tooth shape: relief (the relief index, RFI), complexity (orientation patch count rotated, OPCR), and sharpness (Dirichlet normal energy, DNE). Together, these metrics quantify the shape of the tooth surface without regard for tooth size. They also do not depend on homologous features on the tooth surface for comparative analysis, allowing the broad taxonomic sample presented here. After a methodological study of DNE in Chapter 3, I present correlative studies of dental topography and dietary ecology in marsupials and rodents in Chapters 4 and 5, respectively. Finally, using the same localities as in Chapter 2, I analyze the distributions of dental topography metrics as they relate to climate and habitat.
Results suggest that sharpness and relief are positively correlated with a higher amount of tough foods—such as leaves or insects—in the diet of marsupials, and that relief is positively correlated with grass-eating in rodents. The distributions of all three metrics show some utility when used as a proxy for climatic variables, though the distributions of RFI in marsupials and OPCR in rodents demonstrate the best correlations.
Overall, this dissertation suggests that dental topography can be used to discriminate dietary categories in a wide variety of mammalian groups, and that the distributions of dental ecometrics can be used as proxies for paleoenvironment reconstruction. This may eliminate the need to reconstruct behavior in individual taxa in order to construct ecological indices for fossil mammalian communities, thus offering a more direct avenue to reconstructing past environments.
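To make one of these metrics concrete, the sketch below computes a relief index for a synthetic triangulated height map, using one published definition, RFI = ln(sqrt(A_3D)/sqrt(A_2D)); the dissertation's actual implementations of RFI, OPCR, and DNE operate on scanned tooth meshes and may differ in detail.

```python
import numpy as np

def triangle_areas(pts):
    """Areas of triangles given as an (n, 3, 3) array of vertex coordinates."""
    a = pts[:, 1] - pts[:, 0]
    b = pts[:, 2] - pts[:, 0]
    return 0.5 * np.linalg.norm(np.cross(a, b), axis=1)

def mesh_area(X, Y, Z):
    """Triangulate each grid cell into two triangles and sum their areas."""
    v = np.stack([X, Y, Z], axis=-1)
    t1 = np.stack([v[:-1, :-1], v[1:, :-1], v[:-1, 1:]], axis=2).reshape(-1, 3, 3)
    t2 = np.stack([v[1:, :-1], v[1:, 1:], v[:-1, 1:]], axis=2).reshape(-1, 3, 3)
    return triangle_areas(t1).sum() + triangle_areas(t2).sum()

# Synthetic "tooth crown": a bumpy height field z(x, y) on a regular grid.
x = np.linspace(-1, 1, 60)
X, Y = np.meshgrid(x, x)
Z = 0.5 * np.exp(-4 * (X**2 + Y**2)) + 0.1 * np.cos(6 * X) * np.cos(6 * Y)

area_3d = mesh_area(X, Y, Z)                  # crown surface area
area_2d = mesh_area(X, Y, np.zeros_like(Z))   # projected planar area
rfi = np.log(np.sqrt(area_3d) / np.sqrt(area_2d))
print(f"RFI = {rfi:.3f}")  # higher relief (e.g., shearing crests) raises this value
```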
Item Open Access
Development and application of enhanced, high-resolution physiological features in XCAT phantoms for use in virtual clinical trials (2023)
Sauer, Thomas

Virtual imaging trials (VITs) are a growing part of medical imaging research. VITs are a powerful alternative to the current gold standard for determining or verifying the efficacy of new technology in healthcare: the clinical trial. Prohibitively high expenses, multi-site standardization of protocols, and risks to the health of the trial's patient population are all challenges associated with the clinical trial; conversely, these challenges highlight the strengths of virtualization, particularly with regard to evaluating medical imaging technologies.

Virtual imaging requires a combination of virtual subjects, physics-based imaging simulation platforms, and virtual pathologies. Currently, most computational phantom organs and pathologies are segmented or generated from clinical CT images. With this approach, most computational organs and pathologies are necessarily static, comprising only a single instantaneous representation. Further, this static-anatomy, static-pathology approach does not address the underlying physiological constraints acting on the organs or their pathologies, making some imaging exams (e.g., perfusion, coronary angiography) difficult to simulate robustly. It also does not provide a clear path toward including anatomical and physiological (functional) detail at sub-CT resolution. This project aims to integrate high-resolution, dynamic features into computational human models, focusing primarily on an advanced model known as XCAT. These additions include healthy and progressive-disease anatomy and physiology, micron-resolution coronary artery lesions, and an array of pathologies. In particular, we focus on the physiology needed for CT perfusion studies, dynamic lesions, and coronary artery disease (CAD), and on the means to integrate each of these features into XCAT via custom software. A further outcome is to demonstrate the utility of each of these advances with representative simulated imaging.

Chapter 1 presents a method using clinical information and physiological theory to develop a mathematical model that produces the liver vasculature within a given XCAT. The model can be used to simulate contrast perfusion by taking into account contrast position and concentration at an initial time t and the spatial extent of the contrast in the liver vasculature at subsequent times. The mathematical method enables the simulation of hepatic contrast perfusion in the presence or absence of abnormalities (e.g., focal or diffuse disease) for arbitrary imaging protocols, contrast concentrations, and virtual patient body habitus. The vessel-growing method further generalizes to vascular models of other organs, as it is based on a parameterized approach, allowing flexible repurposing of the developed tool.

Chapter 2 presents a method for using cardiac plaque histology and morphology data acquired at micron-level resolution to generate novel plaques informed by a large, original patient cohort. A methodology for curating and validating anatomical and physiological realism was further applied to the synthesized plaques. This method was integrated with the XCAT heart and coronary artery models to allow simulated imaging of a wide variety of coronary artery plaques in varied orientations and with unique material distribution and composition.
Generation of 200 unique plaques has been optimized to take as little as 5 seconds with GPU acceleration. This work enables future studies to optimize current and emerging CT imaging methods used to detect, diagnose, and treat coronary artery disease.

Chapter 3 focuses on small-scale modeling of the internal structure of the bones of the chest. The internal structure of bone appears as a diffuse but recognizable texture under medical imaging and corresponds to a complex physical structure tuned to meet the physical purpose of the bone (e.g., weight-bearing, protective structure). The project aimed to address the limitations of prior texture-based modeling by creating mathematically based fine bone structures. The method was used to generate realistic bone structures, defined as polygon meshes, with accurate morphological and topological detail for 45 chest bones for each XCAT phantom. This new method defines the spatial extent of the complementary bone-marrow structures that are the root cause of the characteristic image texture, and provides a transition from image-informed characteristic power-law textures to a ground-truth model with exact morphology. We additionally paired this model with the DukeSim CT simulator and XCAT phantoms to produce radiography and CT images with physics-based bone textures. This work enables CT acquisition parameter optimization studies that can inform clinical image assessment of osteoporosis and bone fractures.

Chapter 4 proposes a new model of lesion morphology and insertion, created with the intent to be informed and validated by, rather than constrained by, imaging data. It additionally incorporates biological data, intended to provide dynamic computational lung lesion models for use in CT simulation applications. Each chapter includes a section presenting an example application of the respective tools in virtual medical imaging. Chapter 5 concludes this work with a brief summary of the content and is followed by Appendices A-D. The appendices are organized by topic and contain a visual demonstration of the work in a series of high-resolution, full-page images.
Item Open Access
Development and Calibration of Reaction Models for Multilayered Nanocomposites (2015)
Vohra, Manav

This dissertation focuses on the development and calibration of reaction models for multilayered nanocomposites. The nanocomposites comprise sputter-deposited alternating layers of distinct metallic elements. Specifically, we focus on the equimolar Ni-Al and Zr-Al multilayered systems. Computational models are developed to capture the transient reaction phenomena as well as understand the dependence of reaction properties on the microstructure, composition, and geometry of the multilayers. Together with the available experimental data, simulations are used to calibrate the models and enhance the accuracy of their predictions.
Recent modeling efforts for the Ni-Al system have investigated the nature of self-propagating reactions in the multilayers. Model fidelity was enhanced by incorporating melting effects due to aluminum [Besnoin et al. (2002)]. Salloum and Knio formulated a reduced model to mitigate computational costs associated with multi-dimensional reaction simulations [Salloum and Knio (2010a)]. However, existing formulations relied on a single Arrhenius correlation for diffusivity, estimated for the self-propagating reactions, and cannot be used to quantify mixing rates at lower temperatures within reasonable accuracy [Fritz (2011)]. We thus develop a thermal model for a multilayer stack comprising a reactive Ni-Al bilayer (nanocalorimeter) and exploit temperature evolution measurements to calibrate the diffusion parameters associated with solid-state mixing (720 K to 860 K) in the bilayer.
The equimolar Zr-Al multilayered system, when reacted aerobically, is shown to exhibit slow aerobic oxidation of zirconium (in the intermetallic), sustained for about 2-10 seconds after completion of the formation reaction. In a collaborative effort, we aim to exploit this sustained heat release for bio-agent defeat applications. A simplified computational model is developed to capture the extended reaction regime characterized by oxidation of Zr-Al multilayers. Simulations provide insight into the growth of the zirconia layer during the oxidation process. It is observed that the growth of zirconia is predominantly governed by surface reaction; however, once the layer thickens, growth is controlled by the diffusion of oxygen in zirconia.
A computational model is developed for formation reactions in Zr-Al multilayers. We estimate Arrhenius diffusivity correlations for a low temperature mixing regime characterized by homogeneous ignition in the multilayers, and a high temperature mixing regime characterized by self-propagating reactions in the multilayers. Experimental measurements for temperature and reaction velocity are used for this purpose. Diffusivity estimates for the two regimes are first inferred using regression analysis and full posterior distributions are then estimated for the diffusion parameters using Bayesian statistical approaches. A tight bound on posteriors is observed in the ignition regime whereas estimates for the self-propagating regime exhibit large levels of uncertainty. We further discuss a framework for optimal design of experiments to assess and optimize the utility of a set of experimental measurements for application to reaction models.
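The regression step for inferring an Arrhenius diffusivity correlation can be sketched in a few lines. Assuming the standard form D(T) = D0 exp(-Ea/(R T)), a linear fit of ln D against 1/T recovers the intercept ln D0 and slope -Ea/R; the "measurements" below are synthetic stand-ins for diffusivities backed out of the temperature and reaction-velocity data.

```python
import numpy as np

R = 8.314  # gas constant [J/(mol K)]

# Hypothetical diffusivity estimates at several temperatures (stand-ins for
# values inferred from measured reaction temperatures and front velocities).
np.random.seed(0)
T = np.array([750.0, 800.0, 900.0, 1100.0, 1400.0])                  # [K]
D = 1e-6 * np.exp(-150e3 / (R * T)) * (1 + 0.05 * np.random.randn(5))  # [m^2/s]

# Linearize: ln D = ln D0 - (Ea/R) * (1/T), then least-squares fit.
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
Ea_fit = -slope * R
D0_fit = np.exp(intercept)
print(f"Ea ~ {Ea_fit / 1e3:.1f} kJ/mol, D0 ~ {D0_fit:.2e} m^2/s")
```

A Bayesian treatment of the same problem would place priors on (ln D0, Ea) and sample a posterior (e.g., by MCMC), yielding the uncertainty bounds contrasted between the ignition and self-propagating regimes above.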
Item Open Access
Explore Rb/E2F Activation Dynamics to Define the Control Logic of Cell Cycle Entry in Single Cells (2015)
Dong, Peng

Control of E2F transcription factor activity, regulated by the action of the retinoblastoma tumor suppressor, is critical for determining cell cycle entry and cell proliferation. However, an understanding of the precise determinants of this control, including the role of other cell cycle regulatory activities, has not been clearly defined.
Recognizing that the contributions of individual regulatory components could be masked by heterogeneity in populations of cells, we made use of an integrated system to follow E2F transcriptional dynamics at the single-cell level and in real time. We measured and characterized E2F temporal dynamics during the first cell cycle, as cells enter the cycle after a period of quiescence. Quantitative analyses revealed that crossing a threshold amplitude of E2F transcriptional activity serves as the critical determinant of cell-cycle commitment and division.
Using an ordinary differential equation model of the Rb/E2F network, we performed simulations and predicted that Myc and cyclin D/E activities have distinct roles in modulating E2F transcriptional dynamics. Myc is critical in modulating the amplitude, whereas cyclin D/E activities have little effect on the amplitude but do contribute to modulating the duration of E2F transcriptional activation. These predictions were validated through the analysis of E2F dynamics in single cells under conditions in which cyclin D/E or Myc activities were perturbed by small-molecule inhibitors or RNA interference.
In an ongoing study, we also measured E2F dynamics in cycling cells. We provide preliminary results showing robust oscillatory E2F expression at the single-cell level that aligns with the progression of continuous cell division. The temporal characteristics of these trajectories deserve further quantitative investigation.
Taken together, our results establish a strict relationship between E2F dynamics and cell fate decision at the single-cell level, providing a refined model for understanding the control logic of cell cycle entry.
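A toy caricature (emphatically not the published Rb/E2F model) can illustrate the amplitude-versus-duration logic described above: a Myc-like term scales how strongly E2F is produced, a cyclin-like term provides self-reinforcement, and commitment is scored by whether peak E2F crosses a fixed threshold. All equations and parameter values here are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def e2f_toy(t, y, myc, cyc):
    """Toy E2F activation: Myc-driven synthesis, cyclin-like feedback, decay."""
    e2f = y[0]
    serum = 1.0 if t > 1.0 else 0.0                # growth stimulation after quiescence
    synthesis = myc * serum                        # Myc-like term scales the amplitude
    feedback = cyc * e2f**2 / (0.5**2 + e2f**2)    # cyclin-like self-reinforcement
    return [synthesis + feedback - 1.0 * e2f]      # first-order decay

threshold = 0.8  # commitment threshold on E2F amplitude (arbitrary units)
for myc, cyc, label in [(1.0, 0.6, "control"), (0.3, 0.6, "Myc knockdown"),
                        (1.0, 0.1, "cyclin inhibition")]:
    sol = solve_ivp(e2f_toy, (0, 20), [0.0], args=(myc, cyc), max_step=0.05)
    amp = sol.y[0].max()
    print(f"{label:18s} peak E2F = {amp:.2f} -> divide: {amp >= threshold}")
```

In this toy, knocking down the Myc-like term drops the peak below threshold while the cyclin-like term mostly changes how long the response is sustained, mirroring the division of labor reported in the abstract.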
Item Open Access
Heterogeneous Aggregation Modeling: A Step Towards Understanding the Transport and Fate of Nanoparticle Contaminants (2016)
Therezien, Mathieu

This work presents an improved aggregation model that accounts for two types of particles and simulates the heterogeneous aggregation between them. By accounting for the sizes, concentrations, and affinities of the nano- and background particles, the model can evaluate, for example, how the nanoparticles affect an existing distribution of natural aggregates or how quickly the nanoparticles will settle out of a given system, and it can help determine which parameter to change in order to eliminate the nanoparticles from a system faster. The model could provide a powerful tool to evaluate the exposure of nanoparticles in environmental and engineered waters.
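The two-population idea can be sketched with Smoluchowski-style rate equations. The sketch below is a drastic simplification of a full size-resolved aggregation model: it tracks only two number concentrations, with constant (hypothetical) collision kernels k_ij and attachment efficiencies a_ij rather than the size-dependent ones a complete model would use.

```python
from scipy.integrate import solve_ivp

# k_ij: collision frequency kernels [m^3/s]; a_ij: attachment efficiencies [-].
k11, k12, k22 = 2e-18, 8e-18, 4e-18   # nano-nano, nano-background, background-background
a11, a12, a22 = 0.1, 0.9, 0.3         # e.g., nanoparticles stick mostly to background

def rates(t, n):
    """Loss rates of free nanoparticles (n1) and background particles (n2)."""
    n1, n2 = n
    return [-a11 * k11 * n1 * n1 - a12 * k12 * n1 * n2,
            -a22 * k22 * n2 * n2 - a12 * k12 * n1 * n2]

n0 = [1e14, 1e12]  # initial number concentrations [#/m^3]
sol = solve_ivp(rates, (0, 3600 * 24), n0, rtol=1e-8)
print(f"free nanoparticles after 24 h: {sol.y[0, -1]:.2e} /m^3 "
      f"({100 * sol.y[0, -1] / n0[0]:.1f}% remaining)")
```

Raising the heteroaggregation efficiency a12 or the background particle concentration n2 sweeps nanoparticles out of suspension faster, the kind of parameter question the abstract says the model is meant to answer.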
Item Open Access
Making Models Work (2022)
Finestone, Kobi

Scientific models are used to investigate reality. Here "model" refers to a representation which is created by an agent for a particular inferential purpose. These purposes include but are not limited to explanation, prediction, exploration, classification, and measurement. Through modeling, scientists become capable of understanding the composition and structure of natural systems and social systems in a systematic manner constitutive of scientific research. This process of understanding is underwritten by a logical structure distinctive to scientific modeling.
Throughout this dissertation, I articulate, justify, and defend a specific account of the logical structure of scientific modeling. In order to do so, I detail economic models, which I contend are representative of scientific modeling in general. Broadly, my account of scientific modeling can be decomposed into three distinct claims. First, I argue for understanding scientific modeling in terms of representation. Following others, I then conceptualize representation in terms of purpose and relevant similarity. Against this conceptualization, however, stand numerous counterarguments, which I proceed to detail and then disarm.
Second, I argue that the ideal scientific model is a useful model. Connectedly, I contend that in order for a scientific model to be useful, it must first be idealized. To demonstrate the necessity of idealizations for scientific modeling, I begin by detailing a number of idealization strategies and demonstrate how they are integral to the use of scientific models across the natural and social sciences. To demonstrate that idealized models are not only useful but ideal, I dismantle the putative ideal of completeness, which holds that the ideal model completely represents reality in all its detail and complexity. As I demonstrate, completeness is neither achievable nor a legitimate aspiration for working scientists.
Third, I argue that in order to use scientific models, it is often necessary for scientists to alter them in order to better fit particular target systems. In order to explain the alteration process, I detail the representational continuum found across the sciences which stretches from highly concrete data models to highly abstract principles. Between these extremes are theoretical models and empirical models. In order to construct such models, scientists must engage in an exploratory process by which possibilities are mapped and relative likelihoods estimated. In this way, scientists can construct highly specialized models which can allow them to better pursue specific inferential purposes. All of this results in a division of inferential labor and associated efficiency gains which, I argue, are constitutive of scientific progress.
Item Embargo
Mathematical Modeling of Topical Drug Delivery in Women's Health (2023)
Adrianzen Alvarez, Daniel Roberto

Our lab focuses on developing and optimizing drug delivery systems for applications in women's health. In this field, development of drugs and drug delivery systems is hindered by a heavy reliance on empirically derived data, usually obtained from non-standardized, highly variable in vitro and in vivo animal experiments. Further, without a mechanistic understanding of the various phenomena progressing during drug delivery, experiments tend to explore complex parameter spaces blindly and randomly. Deterministic mathematical models can improve the efficiency of this process by informing rational drug and product design. In this work, we were interested in two applications: (1) drug delivery of topically applied anti-HIV microbicides to the female reproductive tract, and (2) localized intratumoral injections of ethanol-ethyl cellulose mixtures for treatment of cervical lesions.

Development of topically applied anti-HIV microbicides to prevent sexual HIV transmission is inefficient, with in vitro and in vivo tests having limited applicability to real product use. This issue is exacerbated by the dependence of drug performance on adherence and drug-administration conditions, which are not tested until clinical trials. Further, the lack of a standardized pharmacodynamic (PD) metric that depends on the heterogeneous dynamics of viral transport and infection makes it difficult to identify the most promising drug candidates. Here we develop a deterministic mathematical model that incorporates drug pharmacokinetics (PK) and viral transport and dynamics to estimate the probability of infection (POI) as a PD metric that can be computed for a variety of anti-HIV drugs in development. The model reveals key mechanistic insights into the spatiotemporally dependent dynamics of infection in the vaginal mucosa, including susceptibility to infection at different phases of the menstrual cycle. Further, it can be used as a platform to test novel drugs under several conditions, such as the timing of drug administration relative to the time of HIV exposure.

Localized injections of ablative agents, immunotherapeutics, and chemotherapeutics have potential for increased therapeutic efficacy against tumors and reduced systemic effects. However, injection outcomes thus far have been largely unsatisfactory, due to unintended leakage of the active pharmaceutical ingredients (APIs) to non-target tissues. Adding a gelling or precipitating agent to the injection can help ameliorate this limitation by acting to contain the API within the target tissue. One such example is injection of ethanol-ethyl cellulose mixtures. Due to the insolubility of ethyl cellulose in water, this polymer phase-separates in the aqueous tumor environment, forming a fibrous gel that helps contain ethanol, the current ablating agent (and chemotherapeutic drugs in the future), within the boundaries of the tumor. Our collaborators have shown that this strategy can be an effective low-cost treatment for superficial solid tumors, with cervical cancer, cervical dysplasia, and liver cancer being promising targets. Here we present a mathematical model that enables characterization of the injection process. Our model uses Cahn-Hilliard theory to model the phase separation of a precipitating or gelling agent during injection into poroelastic tissue.
This theory is linked to the soft mechanics of tissue deformation during the injection, and to mass transport theory for the API. The model predicts key elements of the injection process, including the pressure field, the soft tissue displacement field, the phase constitution of the precipitating or gelling agent in the tissue, and the concentration distribution of the API in the tissue. The model enables us to explore relationships between these elements and fundamental injection and tissue parameters. This can inform design of optimized injection protocols. Select model predictions include that larger injection volumes do not significantly affect cavity volumes but do lead to faster transport of the API to target tumor tissue. However, although higher flow rates lead to larger cavities – in the absence of tissue fracture, and when injected volume is held constant – they also lead to slower delivery of the API into the target tumor tissue. This is due to the shorter injection times. Importantly, concentration distributions of the API are not sensitive to the speeds of precipitation of the precipitating agents or to diffusion coefficients of the API in the dense (gelled) phase of the injectate material. The model presented here enables first-pass exploration of injection parameter space for select tissue types (properties). This can aid in optimization of localized therapeutic injections in a range of applications.
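For reference, the Cahn-Hilliard formulation invoked above takes the generic form below (the dissertation's version is coupled to poroelastic mechanics and API transport, and its free energy and mobility choices may differ):

```latex
\frac{\partial \phi}{\partial t} = \nabla \cdot \big( M \, \nabla \mu \big),
\qquad
\mu = f'(\phi) - \kappa \nabla^{2} \phi,
\qquad
f(\phi) = \tfrac{1}{4}\big(\phi^{2} - 1\big)^{2},
```

where phi is the local phase fraction of the injectate (dense/gelled versus dilute), M a mobility, mu the chemical potential, and kappa the gradient-energy coefficient setting the interface width between the two phases.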
Item Open Access
Mechanistic Models of Anti-HIV Microbicide Drug Delivery (2016)
Gao, Yajing

A new modality for preventing HIV transmission is emerging in the form of topical microbicides. Some clinical trials of these methods of protection have shown promising results, while other trials have failed to show efficacy. Given the relatively novel nature of microbicide drug transport, a rigorous, deterministic analysis of that transport can help improve the design of microbicide vehicles and aid interpretation of results from clinical trials. This type of analysis can aid microbicide product design by helping understand and organize the determinants of drug transport and the potential efficacies of candidate microbicide products.
Microbicide drug transport is modeled as a diffusion process with convection and reaction effects in appropriate compartments. This is applied here to vaginal gels and rings and a rectal enema, all delivering the microbicide drug Tenofovir. Although the focus here is on Tenofovir, the methods established in this dissertation can readily be adapted to other drugs, given knowledge of their physical and chemical properties, such as the diffusion coefficient, partition coefficient, and reaction kinetics. Other dosage forms such as tablets and fiber meshes can also be modeled using the perspective and methods developed here.
The analyses here include convective details of intravaginal flows by both ambient fluid and spreading gels with different rheological properties and applied volumes. These are input to the overall conservation equations for drug mass transport in different compartments. The results are Tenofovir concentration distributions in time and space for a variety of microbicide products and conditions. The Tenofovir concentrations in the vaginal and rectal mucosal stroma are converted, via a coupled reaction equation, to concentrations of Tenofovir diphosphate, which is the active form of the drug that functions as a reverse transcriptase inhibitor against HIV. Key model outputs are related to concentrations measured in experimental pharmacokinetic (PK) studies, e.g., concentrations in biopsies and blood. A new measure of microbicide prophylactic functionality, the Percent Protected, is calculated. This is the time-dependent fraction of the entire stromal volume (and thus of the host cells therein) in which Tenofovir diphosphate concentrations equal or exceed a target prophylactic value, e.g., an EC50.
Results show the prophylactic potentials of the studied microbicide vehicles against HIV infection. Key design parameters for each are addressed in application of the models. For a vaginal gel, fast spreading at small volume is more effective than slower spreading at high volume. Vaginal rings are shown to be most effective if inserted and retained as close to the fornix as possible. Because of the long half-life of Tenofovir diphosphate, temporary removal of the vaginal ring (after achieving steady state) for up to 24 h does not appreciably diminish the Percent Protected. However, full steady state (for the entire stromal volume) is not achieved until several days after ring insertion. Delivery of Tenofovir to the rectal mucosa by an enema is dominated by the surface area of coated mucosa and by whether the interiors of rectal crypts are filled with the enema fluid. For the enema, 100% Percent Protected is achieved much more rapidly than for vaginal products, primarily because of the much thinner epithelial layer of the mucosa. For example, 100% Percent Protected can be achieved with a one-minute enema application and a 15-minute wait time.
Results of these models agree well with experimental pharmacokinetic data from animal studies and clinical trials. They also improve upon traditional, empirical PK modeling, as illustrated here. Our deterministic approach can inform the design of sampling in clinical trials by indicating time periods during which significant changes in drug concentrations occur in different compartments. More fundamentally, the work here helps delineate the determinants of microbicide drug delivery. This information can be key to improved, rational design of microbicide products and their dosage regimens.
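The Percent Protected metric itself is straightforward to compute once a concentration field is in hand. The sketch below substitutes a textbook one-dimensional erfc diffusion profile for the dissertation's full convection-diffusion-reaction solution and reports the fraction of stromal depth at or above a target EC50; every parameter value here is illustrative, not taken from the work.

```python
import numpy as np
from scipy.special import erfc

# Illustrative parameters (not from the dissertation).
D = 5e-11      # effective diffusivity in stroma [m^2/s]
C0 = 100.0     # drug concentration at the epithelium-stroma boundary [uM]
EC50 = 1.0     # target prophylactic concentration [uM]
depth = 3e-3   # stromal thickness [m]
x = np.linspace(0.0, depth, 1000)

def percent_protected(t_seconds):
    """Fraction of stromal depth with C >= EC50 (semi-infinite erfc profile)."""
    c = C0 * erfc(x / (2.0 * np.sqrt(D * t_seconds)))
    return 100.0 * np.mean(c >= EC50)

for hours in (1, 6, 24):
    print(f"t = {hours:2d} h: Percent Protected = {percent_protected(3600 * hours):.1f}%")
```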
Item Open Access
Modeling Ambiguity: An Analysis of the Paris Temple (2019)
Carrillo, Alan Ricardo

The Paris Temple is a monument that has been lost since the start of the 19th century. This thesis aims to digitally reconstruct the monument in a new virtual environment in order to explore the value of digital modeling and mapping, asking: can we consider these tools effective when attempting to reconcile incommensurable historical evidence about spaces that have been either destroyed or transformed? The thesis first reviews the current state of scholarship, in conjunction with the use of digital techniques, surrounding both the Order of the Knights Templar and medieval architecture as a whole.
Through a synthesis of both analog and digital methods, a new perspective can be reached. Mapping in this project is used only to contextualize the Paris Temple within the entirety of the Templar network that spread across Europe. ESRI's ArcGIS was the mapping tool used to make this map, and a combination of Vectorworks and Autodesk Fusion 360 was used to make the Paris Temple's model. With these digital techniques, the scale of the historical evidence can be manipulated in three different ways: in its capacity, its temporal qualities, and its proximity to the object. Through this manipulation and essential modeling, a more holistic understanding of the site was reached.
Item Open Access
Modeling Temperature Dependence in Marangoni-driven Thin Films (2015)
Potter, Harrison David

Thin liquid films are often studied by reducing the Navier-Stokes equations using Reynolds lubrication theory, which leverages a small aspect ratio to yield simplified governing equations. In this dissertation a plate coating application, in which polydimethylsiloxane coats a silicon substrate, is studied using this approach. Thermal Marangoni stress drives fluid motion against the resistance of gravity, with the parameter regime being chosen such that these stresses lead to a stable advancing front. Additional localized thermal Marangoni stress is used to control the thin film; in particular, coating thickness is modulated through the intensity of such localized forcing. As thermal effects are central to film dynamics, the dissertation focuses specifically on the effect that incorporating temperature dependence into viscosity, surface tension, and density has on film dynamics and control. Incorporating temperature dependence into viscosity, in particular, leads to qualitative changes in film dynamics.

A mathematical model is developed in which the temperature dependence of viscosity and surface tension is carefully taken into account. This model is then studied through numerical computation of solutions, qualitative analysis, and asymptotic analysis. A thorough comparison is made between the behavior of solutions to the temperature-independent and temperature-dependent models. It is shown that using localized thermal Marangoni stress as a control mechanism is feasible in both models. Among constant steady-state solutions there is a unique such solution in the temperature-dependent model, but not in the temperature-independent model, a feature that better reflects the known dynamics of the physical system. The interaction of boundary conditions with finite domain size is shown to generate both periodic and finite-time blow-up solutions, with qualitative differences in solution behavior between models. This interaction also accounts for the fact that locally perturbed solutions, which arise when localized thermal Marangoni forcing is too weak to effectively control thin film thickness, exist only for a discrete set of boundary heights.

Modulating the intensity of localized thermal Marangoni forcing is an effective means of modulating the thickness of a thin film for a plate coating application; however, such control must be initiated before the film reaches the full thickness it would reach in the absence of such localized forcing. This conclusion holds for both the temperature-independent and temperature-dependent mathematical models; furthermore, incorporating temperature dependence into viscosity causes qualitative changes in solution behavior that better align with known features of the underlying physical system.
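In one spatial dimension, lubrication models of this type generically reduce to a single evolution equation for the film thickness h(x, t). A representative form, balancing Marangoni shear, gravity, and capillarity, is shown below; the signs and coefficients depend on the geometry, and the dissertation's contribution is precisely to let the viscosity and surface tension vary with temperature:

```latex
\frac{\partial h}{\partial t}
+ \frac{\partial}{\partial x}\left(
    \frac{\tau\, h^{2}}{2\mu}
    - \frac{\rho g\, h^{3}}{3\mu}
    + \frac{\sigma\, h^{3}}{3\mu}\,\frac{\partial^{3} h}{\partial x^{3}}
\right) = 0,
```

where tau is the imposed thermal Marangoni shear stress, mu the viscosity, sigma the surface tension, and rho g the gravitational body force opposing the climbing film.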
Item Open Access
Modeling Temporal and Spatial Data Dependence with Bayesian Nonparametrics (2010)
Ren, Lu

In this thesis, temporal and spatial dependence are considered within nonparametric priors to help infer patterns, clusters, or segments in data. In traditional nonparametric mixture models, observations are usually assumed exchangeable, even though dependence often exists associated with the space or time at which data are generated.
Focused on model-based clustering and segmentation, this thesis addresses the issue in different ways, for temporal and spatial dependence.
For sequential data analysis, the dynamic hierarchical Dirichlet process is proposed to capture the temporal dependence across different groups. The data collected at any time point are represented via a mixture associated with an appropriate underlying model; the statistical properties of data collected at consecutive time points are linked via a random parameter that controls their probabilistic similarity. The new model favors a smooth evolutionary clustering while allowing innovative patterns to be inferred. Experimental analysis is performed on music data, and the model may also be employed on text for learning topics.
Spatially dependent data are more challenging to model due to their grid structure and the often large computational cost of analysis. As a nonparametric clustering prior, the logistic stick-breaking process introduced here imposes the belief that proximate data are more likely to be clustered together. Multiple logistic regression functions generate a set of sticks, each of which dominates a spatially localized segment. The proposed model is employed on image segmentation and speaker diarization, yielding generally homogeneous segments with sharp boundaries.
In addition, we consider multi-task learning in which each task is associated with spatial dependence. For the specific application of co-segmentation of multiple images, a hierarchical Bayesian model called H-LSBP is proposed. By sharing the same mixture atoms across different images, the model infers the similarity between each pair of images, and hence can be employed for image sorting.
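The spatial locality of the logistic stick-breaking construction is easy to see in code. In the one-dimensional sketch below (an illustration, not the thesis implementation), each stick weight is a logistic function of location, and segment k receives its stick times the probability mass left over from earlier sticks.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Locations along a 1-D "image"; each stick is a logistic function of location,
# here chosen so successive sticks peel off successive spatial segments.
x = np.linspace(0, 10, 200)
centers = [2.5, 5.0, 7.5]   # where each logistic function switches off
slope = 4.0                 # larger slope = crisper segment boundaries

K = len(centers)
remaining = np.ones_like(x)
pi = np.zeros((K + 1, len(x)))
for k, c in enumerate(centers):
    stick = sigmoid(slope * (c - x))    # ~1 to the left of c, ~0 to the right
    pi[k] = stick * remaining           # probability of segment k at each location
    remaining = remaining * (1.0 - stick)
pi[K] = remaining                       # leftover mass forms the final segment

# Each location ends up dominated by one spatially contiguous segment.
print("segment assignment along x:", np.argmax(pi, axis=0))
```

Because each stick is a smooth function of position, nearby locations receive similar segment probabilities, which is exactly the "proximate data cluster together" belief the prior encodes.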
Item Open Access
Multisensory Integration, Segregation, and Causal Inference in the Superior Colliculus (2020)
Mohl, Jeffrey Thomas

The environment is sampled by multiple senses, which are woven together to produce a unified perceptual state. However, unifying these senses requires assigning particular signals to the same or different underlying objects or events. Sensory signals originating from the same source should be integrated together, while signals originating from separate sources should be segregated from one another. Each of these computations is associated with different neural encoding strategies, and it is unknown how these strategies interact. Here, we begin to characterize how this problem is solved in the primate brain.

First, we developed a behavioral paradigm and applied a computational modeling approach to demonstrate that monkeys, like humans, implement a form of Bayesian causal inference to decide whether two stimuli (one auditory and one visual) originated from the same source. We then recorded single-unit neural activity from a representative multisensory brain region, the superior colliculus (SC), while monkeys performed this task. We found that SC neurons encoded either segregated unisensory or integrated multisensory target representations in separate sub-populations of neurons. These responses were well described by a weighted linear combination of unisensory responses which did not account for spatial separation between targets, suggesting that SC sensory responses did not immediately discriminate between common-cause and separate-cause conditions as predicted by Bayesian causal inference. These responses became less linear as the trial progressed, hinting that such a causal inference may evolve over time. Finally, we implemented a single-trial analysis method to determine whether the observed linearity was indicative of true weighted combinations on each trial, or whether this observation was an artifact of pooling data across trials. We found that initial sensory responses (0-150 ms) were well described by linear models even at the single-trial level, but that later sustained (150-600 ms) and saccade-period responses were instead better described as fluctuating between encoding either the auditory or visual stimulus alone. We also found that these fluctuations were correlated with behavior, suggesting that they may reflect a convergence from the SC encoding all potential targets to preferentially encoding only a specific target on a given trial. Together, these results demonstrate that non-human primates (like humans) perform an idealized version of Bayesian causal inference, that this inference may depend on separate sub-populations of neurons maintaining either integrated or segregated stimulus representations, and that these responses then evolve over time to reflect more complex encoding rules.
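The Bayesian causal inference computation referenced here has a standard textbook form (e.g., Körding et al., 2007): compare the likelihood that the auditory and visual measurements arose from one shared source against the likelihood of two independent sources. The sketch below implements that generic version; the noise and prior parameters are hypothetical, not values fitted to the monkeys' behavior.

```python
import numpy as np

def p_common(x_a, x_v, sigma_a=6.0, sigma_v=2.0, sigma_p=10.0, prior_c=0.5):
    """Posterior probability that auditory/visual cues share one source (in deg)."""
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2
    # Likelihood of the cue pair under a single source (source location integrated out).
    var1 = va * vv + va * vp + vv * vp
    L1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va) / var1) \
         / (2 * np.pi * np.sqrt(var1))
    # Likelihood under two independent sources.
    L2 = np.exp(-0.5 * (x_a**2 / (va + vp) + x_v**2 / (vv + vp))) \
         / (2 * np.pi * np.sqrt((va + vp) * (vv + vp)))
    return prior_c * L1 / (prior_c * L1 + (1 - prior_c) * L2)

for sep in (0.0, 5.0, 15.0):  # degrees of audio-visual separation
    print(f"separation {sep:4.1f} deg -> P(common cause) = {p_common(0.0, sep):.2f}")
```

As the spatial separation between cues grows, the posterior probability of a common cause falls, which is the behavioral signature the paradigm above was built to detect.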
Item Open Access
On the Advancement of Probabilistic Models of Decompression Sickness (2016)
Hada, Ethan Alexander

The work presented in this dissertation is focused on applying engineering methods to develop and explore probabilistic survival models for the prediction of decompression sickness in U.S. Navy divers. Mathematical modeling, computational model development, and numerical optimization techniques were employed to formulate and evaluate the predictive quality of models fitted to empirical data. In Chapters 1 and 2 we present general background information relevant to the development of probabilistic models applied to predicting the incidence of decompression sickness. The remainder of the dissertation introduces techniques developed in an effort to improve the predictive quality of probabilistic decompression models and to reduce the difficulty of model parameter optimization.
The first project explored seventeen variations of the hazard function using a well-perfused parallel compartment model. Models were parametrically optimized using the maximum likelihood technique. Model performance was evaluated using both classical statistical methods and model selection techniques based on information theory. Optimized model parameters were overall similar to those of previously published models. Results favored a novel hazard function definition that included both ambient pressure scaling and individually fitted compartment exponent scaling terms.
We developed ten pharmacokinetic compartmental models that included explicit delay mechanics to determine whether predictive quality could be improved through the inclusion of material transfer lags. A fitted discrete delay parameter augmented the inflow to the compartment systems from the environment. Based on the observation that, for many of our models, symptoms are often reported after risk accumulation begins, we hypothesized that the inclusion of delays might improve correlation between the model predictions and observed data. Model selection techniques identified two delay models as having the best overall performance, but comparison to the best-performing model without delay, together with model selection against our best identified no-delay pharmacokinetic model, indicated that the delay mechanism was not statistically justified and did not substantially improve model predictions.
Our final investigation explored parameter bounding techniques to identify parameter regions in which statistical model failure will not occur. Statistical model failure occurs when a model predicts zero probability of a diver experiencing decompression sickness for an exposure that is known to produce symptoms. Using a metric related to the instantaneous risk, we successfully identify regions where model failure will not occur and locate the boundaries of these regions using a root-bounding technique. Several models are used to demonstrate the techniques, which may be employed to reduce the difficulty of model optimization in future investigations.
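The survival-model backbone shared by these investigations can be written compactly: the probability of decompression sickness over an exposure is one minus the exponential of the integrated instantaneous risk, P(DCS) = 1 - exp(-integral of r(t) dt). The sketch below uses a single well-perfused compartment and a deliberately simplified supersaturation hazard; it is a stand-in for, not one of, the dissertation's seventeen hazard variants, and all parameter values are hypothetical.

```python
import numpy as np

def p_dcs(depths_msw, minutes, tau=20.0, gain=0.002, dt=0.1):
    """P(DCS) = 1 - exp(-integrated hazard); single well-perfused compartment."""
    p_tissue = 1.0  # tissue gas tension [atm absolute], starting at the surface
    risk = 0.0
    for depth, dur in zip(depths_msw, minutes):
        p_amb = 1.0 + depth / 10.0                  # ambient pressure [atm] at depth
        for _ in np.arange(0.0, dur, dt):
            p_tissue += dt * (p_amb - p_tissue) / tau           # exponential gas exchange
            hazard = gain * max(p_tissue - p_amb, 0.0) / p_amb  # supersaturation risk
            risk += dt * hazard
    return 1.0 - np.exp(-risk)

# A square dive profile: 40 min at 30 msw, then 60 min back at the surface.
print(f"P(DCS) ~ {100 * p_dcs([30.0, 0.0], [40.0, 60.0]):.2f}%")
```

The "zero predicted probability for a symptomatic exposure" failure mode discussed above corresponds, in this toy, to the hazard never becoming positive anywhere along the profile.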
Item Open Access
On the Significance of Stimulus Waveform in the Modulation of Oscillatory Activity in Excitable Tissues (2021)
Eidum, Derek Mitchell

Electrical stimulation can influence the natural rhythms of activity in heart and brain tissue and has numerous applications in the treatment of cardiac and neurological conditions. The design and optimization of electrical stimulus treatments rely on the ability of researchers to predict the physiological responses of the target tissue to external stimulation. These responses vary greatly depending on the stimulus waveform and parameters as well as the state of ongoing activity in the target region, in ways that are not yet fully understood. The objective of this dissertation is to examine the theoretical basis for differential responses to rhythmic external stimulation based on the properties of the stimulus and target tissue, and to provide insights for future stimulus technique design.

Synaptic plasticity plays a key role in neurostimulation, as it allows the effects of stimulus treatments to persist long after the stimulus ends. Rhythmic stimulation can entrain natural neural oscillations and produce persistent changes in the frequency content of neural activity. However, the mechanisms behind these changes are largely unknown. To this end, simple neural oscillator models were constructed to examine the role of synaptic plasticity and sinusoidal stimulation in the synchronization between oscillating regions. Sinusoidal stimulation of different frequencies and strengths can disrupt the intrinsic patterns of network activity, causing information to propagate through the network via different synaptic paths. These new pathways are reinforced through spike-timing-dependent plasticity, fundamentally altering the network behavior post-stimulation. The resulting network activity depends on the stimulus strength and frequency as well as the intrinsic frequencies of the neural oscillators and the strength of inter-oscillator coupling.

Additionally, the effects of rhythmic stimulation depend on the spatial properties of the applied stimulus. By applying out-of-phase sinusoidal current to transverse pairs of electrodes, electric fields may be generated which maintain an approximately fixed strength but rotate in space. Rotational fields may provide utility in the modulation of spiral wave dynamics in excitable tissues, which are associated with reentrant cycles in cardiac arrhythmias as well as a number of processes within the brain. To explore this, spiral waves were generated in computational models of engineered excitable tissue and were subjected to rotating and sinusoidal electric fields of varying strength and frequency. Rotational fields which match the direction of spiral propagation provide significant efficiency gains in entraining spiral frequency when compared to sinusoidal stimulus, while retrograde rotational fields can reverse the direction of spiral propagation. Even in the absence of spiral wave dynamics, rotational field stimulation may provide utility in the modulation of neural oscillations. The response of a neuron to external stimulation depends on its orientation relative to the electric field gradient, which gives rise to orientation-dependent responses to stimulus treatments. Rotational fields may therefore improve neurostimulus efficacy by influencing the excitability of neurons regardless of their orientation.
To explore how rotating fields influence neural oscillations, two neural network model architectures were utilized: large-scale bursting networks, and networks of linked idealized oscillators with plastic inter-oscillator connections. Networks were subjected to rotational and sinusoidal fields, and their behaviors were measured as a function of stimulus strength, frequency, and orientation, as well as the degree of axonal alignment within the network. In spatially aligned networks, rotational fields entrain oscillations and promote network synchrony regardless of orientation, whereas the effects of sinusoidal fields exhibit strong orientation-dependence. In spatially disordered networks, however, rotational fields promote activity in different neurons at different stimulus phases, resulting in reduced network synchrony. These findings expand our knowledge on the significance of stimulus waveform in the modulation of electrically excitable tissues. The ability to understand and predict physiological responses to stimulation will open new doors in the design and optimization of stimulus techniques to achieve desired outcomes.
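Concretely, the two stimulus waveforms compared throughout this work can be written as follows (E_0 is the field amplitude and omega the angular frequency; this is the generic form, independent of any particular electrode geometry):

```latex
\mathbf{E}_{\text{sin}}(t) = E_{0}\cos(\omega t)\,\hat{\mathbf{x}},
\qquad
\mathbf{E}_{\text{rot}}(t) = E_{0}\left[\cos(\omega t)\,\hat{\mathbf{x}} \pm \sin(\omega t)\,\hat{\mathbf{y}}\right].
```

The sinusoidal field oscillates in magnitude along a fixed axis, while the rotating field keeps constant magnitude E_0 and sweeps its direction at omega, with the sign selecting the rotation sense; it is generated by the out-of-phase transverse electrode pairs described above, and its direction-sweeping character is what removes the orientation dependence of the neural response.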
Item Open Access
Radiomics on Spatial-Temporal Manifolds via Fokker-Planck Dynamics (2023)
Stevens, Jack

The purpose of this work was to develop a new radiomics paradigm for sparse, time-series imaging data, in which features are extracted from a spatial-temporal manifold modeling the time evolution between images, and to assess its prognostic value in patients with oropharyngeal cancer (OPC). To accomplish this, we developed an algorithm to mathematically describe the relationship between two images acquired at time t=0 and t>0. These images serve as boundary conditions of a partial differential equation describing the transition from one image to the other. To solve this equation, we propagate the position and momentum of each voxel according to Fokker-Planck dynamics (i.e., a technique common in statistical mechanics). This transformation is driven by an underlying potential force uniquely determined by the equilibrium image. The solution generates a spatial-temporal manifold (3 spatial dimensions + time) from which we define dynamic radiomic features.

First, our approach was numerically verified by stochastically sampling dynamic Gaussian processes of monotonically decreasing noise. The transformation from high to low noise was compared between our Fokker-Planck estimation and the simulated ground truth. To demonstrate feasibility and clinical impact, we applied our approach to 18F-FDG-PET images to estimate early metabolic response of patients (n=57) undergoing definitive (chemo)radiation for OPC. Images were acquired pre-treatment and two weeks intra-treatment (after 20 Gy). Dynamic radiomic features capturing changes in texture and morphology were then extracted. Patients were partitioned into two groups based on similar dynamic radiomic feature expression via k-means clustering and compared by Kaplan-Meier analyses with log-rank tests (p<0.05). These results were compared to conventional delta radiomics to test the added value of our approach.

Numerical results confirmed our technique can recover image noise characteristics given sparse input data as boundary conditions. Our technique was able to model tumor shrinkage and metabolic response. While no delta radiomics features proved prognostic, Kaplan-Meier analyses identified nine significant dynamic radiomic features. The most significant feature was Gray-Level-Size-Zone-Matrix gray-level variance (p=0.011), which demonstrated prognostic improvement over its corresponding delta radiomic feature (p=0.722). We developed, verified, and demonstrated the prognostic value of a novel, physics-based radiomics approach over conventional delta radiomics via data assimilation of quantitative imaging and differential equations.
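Schematically, the voxel dynamics behind the spatial-temporal manifold follow the standard Fokker-Planck form below; the work's discretization, and the construction of the potential from the equilibrium image, involve details beyond this generic statement:

```latex
\frac{\partial p(\mathbf{x}, t)}{\partial t}
= -\,\nabla \cdot \big[\, \mathbf{F}(\mathbf{x})\, p(\mathbf{x}, t) \,\big]
+ D\, \nabla^{2} p(\mathbf{x}, t),
\qquad
\mathbf{F}(\mathbf{x}) = -\nabla U(\mathbf{x}),
```

where p is the propagated voxel density, D a diffusion coefficient, and U the potential determined by the equilibrium image; the two acquired images act as boundary conditions in time, and dynamic radiomic features are read off the interpolated volumes along t.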
Item Open Access
Radium Isotopes as Tracers of Groundwater-Surface Water Interactions in Inland Environments (2011)
Raanan Kiperwas, Hadas

Groundwater has an important role in forging the composition of surface water, supplying nutrients crucial for the development of balanced ecosystems and potentially introducing contaminants into otherwise pristine surface water. Due to water-rock interactions, radium (Ra) in groundwater is typically much more abundant than in surface water. In saline environments Ra is soluble and is considered a conservative tracer (apart from radioactive decay) for Ra-rich groundwater seepage. Hence, in coastal environments, where mostly fresh groundwater seeps into saline surface water, Ra has been the prominent tracer for tracking and modeling groundwater seepage for more than three decades. However, due to its reactivity and non-conservative behavior, Ra is rarely used for tracing groundwater seepage into fresh or hypersaline surface water: in freshwater, Ra is lost mostly through adsorption onto sediments and suspended particles; in hypersaline environments, Ra can be removed through co-precipitation, most notably with sulfate salts.
This work examines the use of Ra as a tracer for groundwater seepage into freshwater lakes and rivers and into hypersaline lakes. The study examines groundwater-surface water interactions in four different environments and salinity ranges: (1) saline groundwater discharge into a freshwater lake (the Sea of Galilee, Israel); (2) modification of pore water transitioning from saline to fresh along its flow through the sediments underlying the Sea of Galilee, Israel; (3) fresh groundwater discharge into hypersaline lakes (Sand Hills, Nebraska); and (4) fresh groundwater discharge into a freshwater river (Neuse River, North Carolina). In addition to measurement of the four Ra isotopes (226Ra, 228Ra, 223Ra, 224Ra), this study integrates geochemical tracers (major and trace elements) with additional isotopic tools (strontium and boron isotopes) to better understand the geochemistry associated with the seepage process. To better understand the critical role of salinity in Ra adsorption, this study includes a series of adsorption experiments. The results of these experiments show that Ra loss through adsorption decreases with increasing salinity and diminishes at salinities as low as ~5% of the salinity of seawater.
Integration of the geochemical data with mass-balance models corrected for adsorption allows estimation of groundwater seepage into the Sea of Galilee (Israel) and the Neuse River (North Carolina). A study of the pore water underlying the Sea of Galilee shows significant modifications to the geochemistry and Ra activity of the saline pore water percolating through the sediments underlying the lake. In high-salinity environments such as the saline lakes of the Nebraska Sand Hills, Ra is shown to be removed through co-precipitation with sulfate minerals; its integration into barite (BaSO4) is shown to be limited by the Ra:Ba ratio in the precipitating barite.
Overall, this work demonstrates that Ra is a sensitive tracer for quantifying groundwater discharge even in low-saline environments. Yet the high reactivity of Ra (adsorption, co-precipitation, production of the short-lived isotopes) requires a deep understanding of the geochemical processes that shape and control Ra abundances in water resources.
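At steady state, the mass-balance logic behind these seepage estimates reduces to simple arithmetic: groundwater Ra inputs must balance outflow and radioactive decay, with the adsorption correction discussed above applied to the inputs. The numbers below are purely illustrative, not data from the study, and the crude adsorption fraction stands in for the study's salinity-dependent correction; for the short-lived isotopes (223Ra, 224Ra), the decay term would dominate.

```python
# Steady-state 226Ra mass balance for a lake (illustrative numbers only).
lam = 4.33e-4     # 226Ra decay constant [1/yr] (half-life ~1600 yr)
V = 4.0e9         # lake volume [m^3]
Q_out = 5.0e8     # surface outflow [m^3/yr]
Ra_lake = 50.0    # lake 226Ra activity [dpm/m^3]
Ra_gw = 900.0     # groundwater endmember activity [dpm/m^3]
f_ads = 0.2       # fraction of input Ra lost to adsorption (salinity-dependent)

# Inputs (1 - f_ads) * Ra_gw * Q_gw balance outflow plus decay of the inventory:
Q_gw = (Q_out * Ra_lake + lam * V * Ra_lake) / ((1.0 - f_ads) * Ra_gw)
print(f"implied groundwater discharge: {Q_gw:.3e} m^3/yr")
```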
Item Open Access
Reducing Uncertainty in the Biosphere-Atmosphere Exchange of Trace Gases (2010)
Novick, Kimberly Ann

A large portion of the anthropogenic emissions of greenhouse gases (GHGs) is cycled through the terrestrial biosphere. Quantifying the exchange of these gases between the terrestrial biosphere and the atmosphere is critical to constraining their atmospheric budgets now and in the future. These fluxes are governed by biophysical processes like photosynthesis, transpiration, and microbial respiration, which are driven by factors like meteorology, disturbance regimes, and long-term climate and land cover change. These complex processes occur over a broad range of temporal (seconds to decades) and spatial (millimeters to kilometers) scales, necessitating the application of simplifying models to forecast fluxes at the scales required by climate mitigation and adaptation policymakers.
Over the long history of biophysical research, much progress has been made towards developing appropriate models for the biosphere-atmosphere exchange of GHGs. Many processes are well represented in model frameworks, particularly at the leaf scale. However, some processes remain poorly understood, and models do not perform robustly over coarse spatial scales and long time frames. Indeed, model uncertainty is a major contributor to difficulties in constraining the atmospheric budgets of greenhouse gases.
The central objective of this dissertation is to reduce uncertainty in the quantification and forecasting of the biosphere-atmosphere exchange of greenhouse gases by addressing a diverse array of research questions through a combination of five unique field experiments and modeling exercises. In the first chapter, nocturnal evapotranspiration, a physiological process which had been largely ignored until recent years, is quantified and modeled in three unique ecosystems co-located in central North Carolina, U.S.A. In the second chapter, longer-term drivers of evapotranspiration are explored by developing and testing theoretical relationships between plant water use and hydraulic architecture that may be readily incorporated into terrestrial ecosystem models. The third chapter builds on this work by linking key parameters of carbon assimilation models to structural and climatic indices that are well specified over much of the land surface, in an effort to improve model parameterization schemes. The fourth chapter directly addresses questions about the interaction between physiological carbon cycling and disturbance regimes in current and future climates, which are generally poorly represented in terrestrial ecosystem models. The last chapter explores effluxes of methane and nitrous oxide (which are historically understudied) in addition to CO2 exchange in a large temperate wetland ecosystem (an historically understudied biome). While these five case studies are somewhat distinct investigations, they all (a) are grounded in the principles of biophysics, (b) rely on similar measurement and mathematical modeling techniques, and (c) are conducted under the governing objective of reducing measurement and model uncertainty in the biosphere-atmosphere exchange of greenhouse gases.