Browsing by Subject "Applied mathematics"
Item Open Access
A Class of Tetrahedral Finite Elements for Complex Geometry and Nonlinear Mechanics: A Variational Multiscale Approach (2019), Abboud, Nabil.
In this work, a stabilized finite element framework is developed to simulate small and large deformation solid mechanics problems involving complex geometries and complicated constitutive models. In particular, the focus is on solid dynamics problems involving nearly and fully incompressible materials. The work is divided into three main themes: the first is concerned with the development of stabilized finite element algorithms for hyperelastic materials, the second handles the case of viscoelastic materials, and the third focuses on algorithms for J2-plastic materials. For all three cases, problems in the small and large deformation regimes are considered, and for the J2-plasticity case, both quasi-static and dynamic problems are examined.
Among the key features of the algorithms developed in this work are the simplicity of their implementation in an existing finite element code and their applicability to problems involving complicated geometries. The former is achieved by using a mixed formulation of the solid mechanics equations in which the velocity and pressure unknowns are represented by linear shape functions, whereas the latter is realized by using triangular elements, which offer numerous advantages over quadrilaterals when meshing complicated geometries. To achieve stability of the algorithm, a new approach is proposed in which the variational multiscale method is applied to the mixed form of the solid mechanics equations written as a first-order system, whereby the pressure equation is cast in rate form.
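For orientation, the following is a schematic of the kind of first-order mixed system described above, written for a nearly incompressible solid with displacement $u$, velocity $v$, pressure $p$, density $\rho$, bulk modulus $\kappa$, and body force $b$; the signs and constitutive details here are illustrative assumptions rather than the dissertation's exact equations:

```latex
\begin{aligned}
\dot{u} &= v, \\
\rho\,\dot{v} &= \nabla\cdot\boldsymbol{\sigma}(u, p) + \rho\, b, \\
\tfrac{1}{\kappa}\,\dot{p} &= \nabla\cdot v \qquad \text{(pressure equation in rate form)}.
\end{aligned}
```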
Through a series of numerical simulations, it is shown that the stability properties of the proposed algorithm are invariant to the constitutive model and the time integrator used. Convergence tests show that the algorithm is second-order accurate, in the $L^2$-norm, for the displacements, velocities, and pressure. Finally, the robustness of the algorithm is showcased on realistic test cases involving complicated geometries and very large deformations.
Item Open Access
A Geometric Approach to Biomedical Time Series Analysis (2020), Malik, John.
Biomedical time series are non-invasive windows through which we may observe human systems. Although a vast amount of information is hidden in the medical field's growing collection of long-term, high-resolution, and multi-modal biomedical time series, effective algorithms for extracting that information have not yet been developed. We are particularly interested in the physiological dynamics of a human system, namely the changes in state that the system experiences over time (which may be intrinsic or extrinsic in origin). We introduce a mathematical model for a particular class of biomedical time series, called the wave-shape oscillatory model, which quantifies the sense in which dynamics are hidden in those time series. There are two key ideas behind the new model. First, instead of viewing a biomedical time series as a sequence of measurements made at the sampling rate of the signal, we can often view it as a sequence of cycles occurring at irregularly-sampled time points. Second, the "shape" of an individual cycle is assumed to have a one-to-one correspondence with the state of the system being monitored; as such, changes in system state (dynamics) can be inferred by tracking changes in cycle shape. Since physiological dynamics are not random but are well-regulated (except in the most pathological of cases), we can assume that all of the system's states lie on a low-dimensional, abstract Riemannian manifold called the phase manifold. When we model the correspondence between the hidden system states and the observed cycle shapes using a diffeomorphism, we allow the topology of the phase manifold to be recovered by methods belonging to the field of unsupervised manifold learning. In particular, we prove that the physiological dynamics hidden in a time series adhering to the wave-shape oscillatory model can be well-recovered by applying the diffusion maps algorithm to the time series' set of oscillatory cycles. We provide several applications of the wave-shape oscillatory model and the associated algorithm for dynamics recovery, including unsupervised and supervised heartbeat classification, derived respiratory monitoring, intra-operative cardiovascular monitoring, supervised and unsupervised sleep stage classification, and f-wave extraction (a single-channel blind source separation problem).
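To make the dynamics-recovery step concrete, here is a bare-bones diffusion maps embedding in Python/numpy. It assumes the oscillatory cycles have already been detected and resampled to a common length (rows of `cycles`); the kernel bandwidth `eps` and the density normalization are generic textbook choices, not specifics of this dissertation:

```python
import numpy as np

def diffusion_map(cycles, eps, n_coords=2):
    """Embed a set of equal-length cycles (rows of `cycles`) with diffusion maps."""
    # Pairwise squared Euclidean distances between cycle shapes
    d2 = np.sum((cycles[:, None, :] - cycles[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / eps)                      # Gaussian affinity kernel
    q = K.sum(axis=1)
    K = K / np.outer(q, q)                     # density normalization (alpha = 1)
    P = K / K.sum(axis=1, keepdims=True)       # row-stochastic diffusion operator
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)             # P is conjugate to a symmetric
    vals, vecs = vals.real[order], vecs.real[:, order]  # matrix, so spectrum is real
    # Drop the trivial constant eigenvector; scale coordinates by eigenvalues
    return vecs[:, 1:n_coords + 1] * vals[1:n_coords + 1]
```

The low-dimensional coordinates returned here play the role of a recovered parametrization of the phase manifold.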
Item Open Access
A New Method for Modeling Free Surface Flows and Fluid-structure Interaction with Ocean Applications (2016), Lee, Curtis.
The computational modeling of ocean waves and ocean-faring devices poses numerous challenges. Among these are the need to stably and accurately represent both the fluid-fluid interface between water and air and the fluid-structure interfaces arising between solid devices and one or more fluids. As techniques are developed to stably and accurately balance the interactions between fluid and structural solvers at these boundaries, a similarly pressing challenge is the development of algorithms that are massively scalable and capable of performing large-scale three-dimensional simulations on reasonable time scales. This dissertation introduces two separate methods for approaching this problem, with the first focusing on the development of sophisticated fluid-fluid interface representations and the second focusing primarily on scalability and extensibility to higher-order methods.
We begin by introducing the narrow-band gradient-augmented level set method (GALSM) for incompressible multiphase Navier-Stokes flow. This is the first use of the high-order GALSM for a fluid flow application, and its reliability and accuracy in modeling ocean environments are tested extensively. The method demonstrates numerous advantages over the traditional level set method, among these a heightened conservation of fluid volume and the representation of subgrid structures.
Next, we present a finite-volume algorithm for solving the incompressible Euler equations in two and three dimensions in the presence of a flow-driven free surface and a dynamic rigid body. In this development, the chief concerns are efficiency, scalability, and extensibility (to higher-order and truly conservative methods). These priorities informed a number of important choices: The air phase is substituted by a pressure boundary condition in order to greatly reduce the size of the computational domain, a cut-cell finite-volume approach is chosen in order to minimize fluid volume loss and open the door to higher-order methods, and adaptive mesh refinement (AMR) is employed to focus computational effort and make large-scale 3D simulations possible. This algorithm is shown to produce robust and accurate results that are well-suited for the study of ocean waves and the development of wave energy conversion (WEC) devices.
Item Open Access
Accelerating the Computation of Density Functional Theory's Correlation Energy under Random Phase Approximations (2019), Thicke, Kyle.
We propose novel algorithms for the fast computation of density functional theory's exchange-correlation energy in both the particle-hole and particle-particle random phase approximations (phRPA and ppRPA). For phRPA, we propose a new cubic scaling algorithm for the calculation of the RPA correlation energy. Our scheme splits up the dependence between the occupied and virtual orbitals in the density response function by use of Cauchy's integral formula. This introduces an additional integral to be carried out, for which we provide a geometrically convergent quadrature rule. Our scheme also uses the interpolative separable density fitting algorithm to further reduce the computational cost in a way analogous to that of the resolution of identity method.
For ppRPA, we propose an algorithm based on stochastic trace estimation. A contour integral is used to break up the dependence between orbitals. The logarithm is expanded into a polynomial, and a variant of the Hutchinson algorithm is proposed to find the trace of the polynomial. This modification of the Hutchinson algorithm allows us to use the structure of the problem to compute each Hutchinson iteration in only quadratic time. This is a large asymptotic improvement over the previous state-of-the-art quartic-scaling method and over the naive sextic-scaling method.
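As a generic illustration of stochastic trace estimation (a plain Hutchinson sketch in Python/numpy, not the structured quadratic-time variant developed in the dissertation), one can estimate the trace of a polynomial of a matrix using only matrix-vector products:

```python
import numpy as np

def hutchinson_trace_poly(A, coeffs, n_samples=200, seed=0):
    """Estimate tr(p(A)) for p(x) = coeffs[0] + coeffs[1]*x + ... by Hutchinson.

    Each sample needs only matrix-vector products with A, so the per-sample
    cost is O(n^2) for dense A (less for sparse or structured A)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    est = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)    # Rademacher probe vector
        v = z.copy()
        acc = coeffs[0] * z
        for c in coeffs[1:]:
            v = A @ v                          # accumulates A^k z incrementally
            acc = acc + c * v
        est += z @ acc                         # unbiased sample of tr(p(A))
    return est / n_samples
```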
Item Open Access
Adaptive Data Representation and Analysis (2018), Xu, Jieren.
This dissertation introduces and analyzes algorithms that aim to adaptively handle complex datasets arising in real-world applications. It contains two major parts. The first part describes an adaptive model of 1-dimensional signals that lies in the field of adaptive time-frequency analysis. It explains a current state-of-the-art method in this field, the Synchrosqueezed transform. It then presents two proposed algorithms that use non-parametric regression to reveal the underlying oscillatory patterns of the targeted 1-dimensional signal and to estimate instantaneous information, e.g., instantaneous frequency, phase, or amplitude functions, via a statistical pattern-driven model.

The second part proposes a population-based imaging technique for human brain bundle/connectivity recovery. It uses local streamlines as newly adopted learning/testing features to segment the brain white matter and thus reconstruct whole-brain information. It also develops a module, named streamline diffusion filtering, to improve the streamline sampling procedure.

Even though these two parts are not directly related, they both rely on an alignment step to register the latent variables to a common coordinate system and thus facilitate the final inference. Numerical results are shown to validate all the proposed algorithms.
Item Open Access
Adaptive Sparse Grid Approaches to Polynomial Chaos Expansions for Uncertainty Quantification (2015), Winokur, Justin Gregory.
Polynomial chaos expansions provide an efficient and robust framework to analyze and quantify uncertainty in computational models. This dissertation explores the use of adaptive sparse grids to reduce the computational cost of determining a polynomial model surrogate while examining and implementing new adaptive techniques.
Determination of chaos coefficients using traditional tensor product quadrature suffers from the so-called curse of dimensionality, where the number of model evaluations scales exponentially with dimension. Previous work used a sparse Smolyak quadrature to temper this dimensional scaling, and was applied successfully to an expensive Ocean General Circulation Model, HYCOM, during the September 2004 passage of Hurricane Ivan through the Gulf of Mexico. Results from this investigation suggested that adaptivity could yield great gains in efficiency. However, efforts at adaptivity are hampered by quadrature accuracy requirements.
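For a sense of what determining chaos coefficients by quadrature involves, here is a minimal 1-D pseudo-spectral projection in Python/numpy for a standard normal input (probabilists' Hermite polynomials); the sparse-grid machinery discussed below generalizes exactly this computation to many dimensions:

```python
import numpy as np
from math import factorial, sqrt, pi
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coefficients(f, order, n_quad=32):
    """1-D polynomial chaos coefficients of f(X), X ~ N(0,1), via quadrature."""
    x, w = hermegauss(n_quad)                  # nodes/weights for weight exp(-x^2/2)
    fx = f(x)
    coeffs = np.empty(order + 1)
    for k in range(order + 1):
        basis = np.zeros(k + 1)
        basis[k] = 1.0                         # coefficient vector selecting He_k
        # c_k = <f, He_k> / <He_k, He_k>, where <He_k, He_k> = sqrt(2*pi) * k!
        coeffs[k] = (w * fx * hermeval(x, basis)).sum() / (sqrt(2 * pi) * factorial(k))
    return coeffs

# example: pce_coefficients(np.sin, order=7); even-order coefficients vanish
```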
We explore the implementation of a novel adaptive strategy to design sparse ensembles of oceanic simulations suitable for constructing polynomial chaos surrogates. We use a recently developed adaptive pseudo-spectral projection (aPSP) algorithm that is based on a direct application of Smolyak's sparse grid formula, and that allows for the use of arbitrary admissible sparse grids. Such a construction ameliorates the severe restrictions posed by insufficient quadrature accuracy. The adaptive algorithm is tested using an existing simulation database of the HYCOM model during Hurricane Ivan. The a priori tests demonstrate that sparse and adaptive pseudo-spectral constructions lead to substantial savings over isotropic sparse sampling.
In order to provide a finer degree of resolution control along two distinct subsets of model parameters, we investigate two methods to build polynomial approximations. Both approaches are based on pseudo-spectral projection (PSP) methods on adaptively constructed sparse grids. The control of the error along different subsets of parameters may be needed in the case of a model depending on uncertain parameters and deterministic design variables. We first consider a nested approach where an independent adaptive sparse grid pseudo-spectral projection is performed along the first set of directions only, and at each point a sparse grid is constructed adaptively in the second set of directions. We then consider the application of aPSP in the space of all parameters, and introduce directional refinement criteria to provide a tighter control of the projection error along individual dimensions. Specifically, we use a Sobol decomposition of the projection surpluses to tune the sparse grid adaptation. The behavior and performance of the two approaches are compared for a simple two-dimensional test problem and for a shock-tube ignition model involving 22 uncertain parameters and 3 design parameters. The numerical experiments indicate that whereas both methods provide effective means for tuning the quality of the representation along distinct subsets of parameters, adaptive PSP in the global parameter space generally requires fewer model evaluations than the nested approach to achieve similar projection error.
In order to increase efficiency even further, a subsampling technique is developed to allow for local adaptivity within the aPSP algorithm. The local refinement is achieved by exploiting the hierarchical nature of nested quadrature grids to determine regions of estimated convergence. In order to achieve global representations with local refinement, synthesized model data from a lower order projection is used for the final projection. The final subsampled grid was also tested with two more robust sparse projection techniques, compressed sensing and hybrid least-angle regression. These methods are evaluated on two sample test functions and then as an a priori analysis of the HYCOM simulations and the shock-tube ignition model investigated earlier. Small but non-trivial efficiency gains were found in some cases; in others, a large reduction in model evaluations was realized with only a small loss of model fidelity. Further extensions and capabilities are recommended for future investigations.
Item Open Access
Algorithms with Applications to Anthropology (2018), Ravier, Robert James.
In this dissertation, we investigate several problems in shape analysis. We start by discussing the shape matching problem. Given that homeomorphisms of shapes are computed in practice by interpolating sparse correspondence, we give an algorithm to refine pairwise mappings in a collection by employing a simple metric condition to obtain partial correspondences of points chosen in a manner that outlines the shapes of interest in a relatively small number of points. We then use this mapping algorithm in two separate applications. First, we investigate the extent to which classical assumptions and methods in statistical shape analysis hold for near continuous discretizations of surfaces spanning different species and genus groups. We find that these methods yield biologically meaningful information, and that resulting operations with these correspondences, including averaging and linear interpolation, yield biologically meaningful surfaces even for distinct geometries. As a second application, we discuss the problem of dictionary learning on shapes in an effort to find sparse decompositions of shapes in a collection. To this end, we define a construction of wavelet-like and ridgelet-like objects that are easily computable at the level of the discretization, both of which have natural interpretation in the smooth case. We then use these in tandem with feature points to create a sparse dictionary, and show that standard sparsification practices still retain biological information.
Item Open Access
An Investigation into the Multiscale Nature of Turbulence and its Effect on Particle Transport (2022), Tom, Josin.
We study the effect of the multiscale properties of turbulence on particle transport, specifically looking at the physical mechanisms by which different turbulent flow scales impact the settling speeds of particles in turbulent flows. The average settling speed of small heavy particles in turbulent flows is important for many environmental problems such as water droplets in clouds and atmospheric aerosols. The traditional explanation for enhanced particle settling speeds in turbulence for a one-way coupled (1WC) system is the preferential sweeping mechanism proposed by Maxey (1987, J. Fluid Mech.), which depends on the preferential sampling of the fluid velocity gradient field by the inertial particles. However, Maxey's analysis does not shed light on the role different turbulent flow scales play in the enhanced settling, partly because the theoretical analysis was restricted to particles with weak inertia.
In the first part of the work, we develop a new theoretical result, valid for particles of arbitrary inertia, that reveals the multiscale nature of the preferential sweeping mechanism. In particular, the analysis shows how the range of scales at which the preferential sweeping mechanism operates depends on particle inertia. This analysis is complemented by results from Direct Numerical Simulations (DNS) in which we examine the role of different flow scales in the particle settling speeds by coarse-graining (filtering) the underlying flow. The results explain the dependence of the particle settling speeds on Reynolds number and show how the saturation of this dependence at sufficiently large Reynolds number depends upon particle inertia. We also explore how particles preferentially sample the fluid velocity gradients at various scales and show that while rapidly settling particles do not preferentially sample the fluid velocity gradients, they do preferentially sample the fluid velocity gradients coarse-grained at scales outside of the dissipation range.
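A toy version of the coarse-graining operation used here (a periodic 1-D top-hat filter in Python/numpy; the dissertation filters 3-D DNS velocity fields, so this is only schematic):

```python
import numpy as np

def coarse_grain_periodic(u, width):
    """Top-hat (box) filter of a periodic 1-D field over `width` grid points,
    removing scales smaller than the filter width."""
    n = len(u)
    kernel = np.zeros(n)
    kernel[:width] = 1.0 / width
    kernel = np.roll(kernel, -(width // 2))    # center the filter at the origin
    # periodic (circular) convolution via the FFT
    return np.fft.ifft(np.fft.fft(u) * np.fft.fft(kernel)).real
```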
Inspired by our finding that the effectiveness of the preferential sweeping mechanism depends on how particles interact with the strain and vorticity fields at different scales, we next shed light on the multiscale dynamics of turbulence by exploring the properties of the turbulent velocity gradients at different scales. We do this by analyzing the evolution equations for the filtered velocity gradient tensor (FVGT) in the strain-rate eigenframe. However, the pressure Hessian and viscous stress are unclosed in this frame of reference, requiring in-depth modeling. Using data from DNS of the forced Navier-Stokes equation, we consider the relative importance of local and non-local terms in the FVGT eigenframe equations across the scales using statistical analysis. We show that the anisotropic pressure Hessian (one of the unclosed terms) exhibits highly non-linear behavior at low values of normalized local gradients, with important modeling implications. We derive a generalization of the classical Lumley triangle that allows us to show that the pressure Hessian has a preference for two-component axisymmetric configurations at small scales, with a transition to a more isotropic state at larger scales. We also show that current models fail to capture a number of subtle features observed in our results, and we provide useful guidelines for improving Lagrangian models of the FVGT.
In the final part of the work, we look at how two-way coupling (2WC) modifies the multiscale preferential sweeping mechanism. We comment on the applicability of the theoretical analysis developed in the first part of the work to 2WC flows. Monchaux & Dejoan (2017, Phys. Rev. Fluids) showed using DNS that while for low particle loading the effect of 2WC on the global flow statistics is weak, 2WC enables the particles to drag the fluid in their vicinity down with them, significantly enhancing their settling, and they argued that two-way coupling suppresses the preferential sweeping mechanism. We explore this further by considering the impact of 2WC on the contribution made by eddies of different sizes to the particle settling. In agreement with Monchaux & Dejoan, we show that even for low loading, 2WC strongly enhances particle settling. However, contrary to their study, we show that preferential sweeping remains important in 2WC flows. In particular, for both 1WC and 2WC flows, the settling enhancement due to turbulence is dominated by contributions from particles in straining regions of the flow, but in the 2WC case the particles also drag the fluid down with them, leading to an enhancement of their settling compared to the 1WC case. Overall, the novel results presented here not only augment the current understanding of the physical mechanisms that produce enhanced settling speeds from a fundamental physics perspective, but can also be used to improve predictive capabilities in large-scale atmospheric modeling.
Item Open Access
An Unbalanced Optimal Transport Problem with a Growth Constraint (2024), Dai, Yuqing.
In this paper, we introduce several unbalanced optimal transport problems between two Radon measures with different total masses. Initially, we explore a generalization of the Benamou-Brenier problem, incorporating a growth constraint to accommodate a non-decreasing total mass during transportation. This leads to the formulation of a modified Hellinger-Kantorovich (mHK) problem. Our investigation reveals quasi-metric properties of this novel problem and characterizes it within a cone setting through a newly defined quasi-cone metric, resulting in an equivalent formulation of the mHK problem. This formulation simplifies the demonstration of the existence of optimal solutions and facilitates explicit calculations for transport problems between two Dirac measures.
A significant advancement in our work is the construction of a dual problem for the mHK problem, a topic previously unexplored. We confirm the duality and identify optimality conditions for transport plans, successfully deriving a one-to-one (Monge) map under certain regularity conditions for the initial measure. Furthermore, we propose a dynamic formulation for the mHK problem within a cone setting, focusing on minimization over dynamic plans involving absolutely continuous curves between cone points. This approach not only projects a dynamic plan onto an absolutely continuous curve between initial and target measures but also establishes a close relationship with solutions to continuity equations.
Motivated by dynamic models of biological growth, our study extends to practical applications, providing an equivalent convex formulation of the mHK problem and developing numerical schemes based on the Douglas-Rachford algorithm and the Alternating Direction Method of Multipliers algorithm. We apply these schemes to synthetic data, demonstrating the utility of our theoretical findings.
Item Open Access
Analytical and Numerical Study of Lindblad Equations (2020), Cao, Yu.
Lindblad equations, introduced in 1976 by Lindblad, and by Gorini, Kossakowski, and Sudarshan, have received much attention in many areas of scientific research. In the nearly fifty years since, many properties and structures of Lindblad equations have been discovered and identified. In this dissertation, we study Lindblad equations from three aspects: (I) the physical perspective; (II) the numerical perspective; and (III) the information theory perspective.
In Chp. 2, we study Lindblad equations from the physical perspective. More specifically, we derive a Lindblad equation for a simplified Anderson-Holstein model arising from quantum chemistry. Though we consider the classical approach (i.e., the weak coupling limit), we provide more explicit scaling for parameters when the approximations are made. Moreover, we derive a classical master equation based on the Lindbladian formalism.
In Chp. 3, we consider numerical aspects of Lindblad equations. Motivated by the dynamical low-rank approximation method for matrix ODEs and stochastic unraveling for Lindblad equations, we are curious about the relation between the action of dynamical low-rank approximation and the action of stochastic unraveling. To address this, we propose a stochastic dynamical low-rank approximation method. In the context of Lindblad equations, we illustrate a commuting relation between the dynamical low-rank approximation and the stochastic unraveling.
In Chp. 4, we investigate Lindblad equations from the information theory perspective. We consider a particular family of Lindblad equations: primitive Lindblad equations with GNS-detailed balance. We identify Riemannian manifolds in which these Lindblad equations are gradient flow dynamics of sandwiched Rényi divergences. The necessary condition for such a geometric structure is also studied. Moreover, we study the exponential convergence behavior of these Lindblad equations to their equilibria, quantified by the whole family of sandwiched Rényi divergences.
Item Open Access
Applications of Topological Data Analysis and Sliding Window Embeddings for Learning on Novel Features of Time-Varying Dynamical Systems (2017), Ghadyali, Hamza Mustafa.
This work introduces geometric and topological data analysis (TDA) tools that can be used in conjunction with sliding window transformations, also known as delay-embeddings, for discovering structure in time series and dynamical systems in an unsupervised or supervised learning framework. For signals of unknown period, we introduce an intuitive topological method to discover the period, and we demonstrate its use on synthetic examples and real temperature data. Alternatively, for almost-periodic signals of known period, we introduce a metric called the Geometric Complexity of an Almost Periodic signal (GCAP), based on a topological construction, which allows us to continuously measure the evolving variation of its periods. We apply this method to temperature data collected from over 200 weather stations in the United States and describe the novel patterns that we observe. Next, we show how geometric and TDA tools can be used in a supervised learning framework. Seizure detection using electroencephalogram (EEG) data is formulated as a binary classification problem. We define new collections of geometric and topological features of multi-channel data, which utilize the temporal and spatial context of EEG, and show how they result in better overall performance of seizure detection than the usual time-domain and frequency-domain features. Finally, we introduce a novel method to sonify persistence diagrams, and more generally any planar point cloud, using a modified version of the harmonic table. This auditory display can be useful for finding patterns that visual analysis alone may miss.
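Since everything in this pipeline starts from the delay-embedding, here is the basic transformation in Python/numpy (the window parameters `dim` and `tau` are illustrative names, not the dissertation's notation):

```python
import numpy as np

def sliding_window(x, dim, tau):
    """Delay-embed a 1-D signal: row i is [x(i), x(i+tau), ..., x(i+(dim-1)tau)].

    A periodic signal traces out a closed loop in R^dim, which persistent
    homology (H_1) can detect; the loop's persistence reflects the period."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i : i + (dim - 1) * tau + 1 : tau] for i in range(n)])
```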
Item Open Access
Asymptotic Analysis and Performance-based Design of Large Scale Service and Inventory Systems (2010), Talay Degirmenci, Isilay.
Many types of services are provided using some equipment or machines, e.g., transportation systems using vehicles. Designs of these systems require capacity decisions, e.g., the number of vehicles. In my dissertation, I use many-server and conventional heavy-traffic limit theory to derive asymptotically optimal capacity decisions, giving the desired level of delay and availability performance with minimum investment. The results provide near-optimal solutions and insights into otherwise analytically intractable problems.
The dissertation will comprise two essays. In the first essay, "Asymptotic Analysis of Delay-based Performance Metrics and Optimal Capacity Decisions for the Machine Repair Problem with Spares," I study the M/M/R machine repair problem with spares. This system can be represented by a closed queuing network. Applications include fleet vehicles' repair and backup capacity, warranty services' staffing and spare items investment decisions. For these types of systems, customer satisfaction is essential; thus, the delays until replacements of broken units are even more important than delays until the repair initiations of the units. Moreover, the service contract may include conditions on not only the fill rate but also the probability of acceptable delay (delay being less than a specified threshold value).
I address these concerns by expressing delays in terms of the broken-machines process; scaling this process by the number of required operating machines (or the number of customers in the system); and using many-server limit theorems (limits taken as the number of customers goes to infinity) to obtain the limiting expected delay and probability of acceptable delay for both delay until replacement and repair initiation. These results lead to an approximate optimization problem to decide on the repair and backup-capacity investment giving the minimum expected cost rate, subject to a constraint on the acceptable delay probability. Using the characteristics of the scaled broken-machines process, we obtain insights about the relationship between quality of service and capacity decisions. Inspired by the call-center literature, we categorize capacity level choice as efficiency-driven, quality-driven, or quality- and efficiency-driven. Hence, our study extends the conventional call center staffing problem to joint staffing and backup provisioning. Moreover, to our knowledge, the machine-repair problem literature has focused mainly on mean and fill rate measures of performance for steady-state cost analysis. This approach provides complex, nonlinear expressions not possible to solve analytically. The contribution of this essay to the machine-repair literature is the construction of delay-distribution approximations and a near-optimal analytical solution. Among the interesting results, we find that for capacity levels leading to very high utilization of both spares and repair capacity, the limiting distribution of delay until replacement depends on one type of resource only, the repair capacity investment.
In the second essay, "Diffusion Approximations and Near-Optimal Design of a Make-to-Stock Queue with Perishable Goods and Impatient Customers," I study a make-to-stock system with perishable inventory and impatient customers as a two-sided queue with abandonment from both sides. This model describes many consumer goods, where not only spoilage but also theft and damage can occur. We will refer to positive jobs as individual products on the shelf and negative jobs as backlogged customers. In this sense, an arriving negative job provides the service to a waiting positive job, and vice versa. Jobs that must wait in queue before potential matching are subject to abandonment. Under certain assumptions on the magnitude of the abandonment rates and the scaled difference between the two arrival rates (products and customers), we suggest approximations to the system dynamics such as average inventory, backorders, and fill rate via conventional heavy traffic limit theory.
We find that the approximate limiting queue length distribution is a normalized weighted average of two truncated normal distributions and then extend our results to analyze make-to-stock queues with/without perishability and limiting inventory space by inducing thresholds on the production (positive) side of the queue. Finally, we develop conjectures for the queue-length distribution for a non-Markovian system with general arrival streams. We take production rate as the decision variable and suggest near-optimal solutions.
Item Open Access
Bayesian Model Uncertainty and Prior Choice with Applications to Genetic Association Studies (2010), Wilson, Melanie Ann.
The Bayesian approach to model selection allows for uncertainty in both model-specific parameters and in the models themselves. Much of the recent Bayesian model uncertainty literature has focused on defining these prior distributions in an objective manner, providing conditions under which Bayes factors lead to the correct model selection, particularly in the situation where the number of variables, p, increases with the sample size, n. This is certainly the case in our area of motivation: the biological application of genetic association studies involving single nucleotide polymorphisms. While the most common approach to this problem has been to apply a marginal test to all genetic markers, we employ analytical strategies that improve upon these marginal methods by modeling the outcome variable as a function of a multivariate genetic profile using Bayesian variable selection. In doing so, we perform variable selection on a large number of correlated covariates within studies involving modest sample sizes.
In particular, we present an efficient Bayesian model search strategy that searches over the space of genetic markers and their genetic parametrization. The resulting method, Multilevel Inference of SNP Associations (MISA), allows computation of multilevel posterior probabilities and Bayes factors at the global, gene, and SNP levels. We use simulated data sets to characterize MISA's statistical power, and show that MISA has higher power to detect association than standard procedures. Using data from the North Carolina Ovarian Cancer Study (NCOCS), MISA identifies variants that were not identified by standard methods and that have been externally 'validated' in independent studies.
In the context of Bayesian model uncertainty for problems involving a large number of correlated covariates, we characterize commonly used prior distributions on the model space and investigate their implicit multiplicity-correction properties, first in the extreme case where the model includes an increasing number of redundant covariates and then in the case of full-rank design matrices. We provide conditions on the asymptotic (in n and p) behavior of the model space prior required to achieve consistent selection of the global hypothesis of at least one associated variable in the analysis using global posterior probabilities (i.e., under 0-1 loss). In particular, under the assumption that the null model is true, we show that the commonly used uniform prior on the model space leads to inconsistent selection of the global hypothesis via global posterior probabilities (the posterior probability of at least one association goes to 1) when the rank of the design matrix is finite. In the full-rank case, we also show inconsistency when p goes to infinity faster than the square root of n. Alternatively, we show that any model space prior such that the global prior odds of association increase at a rate slower than the square root of n results in consistent selection of the global hypothesis in terms of posterior probabilities.
Item Open Access
Compressive Sensing in Transmission Electron Microscopy (2018), Stevens, Andrew.
Electron microscopy is one of the most powerful tools available in observational science. Magnifications of 10,000,000x have been achieved with picometer precision. At this high level of magnification, individual atoms are visible. This is possible because the wavelength of electrons is much smaller than that of visible light, which also means that the highly focused electron beams used to perform imaging contain significantly more energy than visible light. The beam energy is high enough that it can cause radiation damage to metal specimens. Reducing radiation dose while maintaining image quality has been a central research topic in electron microscopy for several decades. Without the ability to reduce the dose, most organic and biological specimens cannot be imaged at atomic resolution. Fundamental processes in materials science and biology arise at the atomic level, thus understanding these processes can only occur if the observational tools can capture information with atomic resolution.
The primary objective of this research is to develop new techniques for low dose and high resolution imaging in (scanning) transmission electron microscopy (S/TEM). This is achieved through the development of new machine learning based compressive sensing algorithms and microscope hardware for acquiring a subset of the pixels in an image. Compressive sensing allows recovery of a signal from significantly fewer measurements than total signal size (under certain conditions). The research objective is attained by demonstrating application of compressive sensing to S/TEM in several simulations and real microscope experiments. The data types considered are images, videos, multispectral images, tomograms, and 4-dimensional ptychographic data. In the simulations, image quality and error metrics are defined to verify that reducing dose is possible with a small impact on image quality. In the microscope experiments, images are acquired with and without compressive sensing so that a qualitative verification can be performed.
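As a generic picture of the recovery step (not the dissertation's Bayesian machine-learning reconstruction), compressive sensing is often posed as l1-regularized least squares; a minimal iterative soft-thresholding (ISTA) solver in Python/numpy:

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=500):
    """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft-thresholding.

    Here A maps a sparse (e.g. transform-domain) image x to the subset of
    measured pixels y; sparsity is what makes subsampled recovery possible."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz const. of gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * (A.T @ (A @ x - y))     # gradient step on the smooth term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x
```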
Compressive sensing is shown to be an effective approach to reduce dose in S/TEM without sacrificing image quality. Moreover, it offers increased acquisition speed and reduced data size. Research leading to this dissertation has been published in 25 articles or conference papers and 5 patent applications have been submitted. The published papers include contributions to machine learning, physics, chemistry, and materials science. The newly developed pixel sampling hardware is being productized so that other microscopists can use compressive sensing in their experiments. In the future, scientific imaging devices (e.g., scanning transmission x-ray microscopy (STXM) and secondary-ion mass spectrometry (SIMS)) could also benefit from the techniques presented in this dissertation.
Item Open Access
Data Transfer between Meshes for Large Deformation Frictional Contact Problems (2013), Kindo, Temesgen Markos.
In the finite element simulation of problems with contact, there arises the need to change the mesh and continue the simulation on a new mesh. This is encountered when the original mesh experiences severe distortion or when the mesh is adapted to minimize errors in the solution. In such instances a crucial component is the transfer of data from the old mesh to the new one.

This work proposes a strategy by which such remeshing can be accomplished in the presence of mortar-discretized contact, focusing in particular on the remapping of contact variables, which must occur to make the method robust and efficient. By splitting the contact stress into normal and tangential components, transferring the normal component as a scalar, and parallel-transporting the tangential component on the contact surface, an accurate and consistent transfer scheme is obtained. Penalty and augmented Lagrangian formulations are considered. The approach is demonstrated by a number of two- and three-dimensional numerical examples.
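The normal/tangential split itself is elementary vector algebra; a sketch in Python/numpy (the parallel transport of the tangential part along the deforming surface, the subtle step in the scheme above, is omitted here):

```python
import numpy as np

def split_traction(t, n):
    """Split a contact traction vector t into a scalar normal component and a
    tangential vector, given a (not necessarily unit) surface normal n."""
    n_hat = n / np.linalg.norm(n)
    t_n = float(t @ n_hat)         # scalar normal part, transferred as a scalar field
    t_t = t - t_n * n_hat          # tangential part, lies in the tangent plane
    return t_n, t_t
```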
Item Open Access
Designing Quantum Channels Induced by Diagonal Gates (2023), Hu, Jingzhen.
The challenge of quantum computing is to combine error resilience with universal computation. Diagonal gates such as the transversal T gate play an important role in implementing a universal set of quantum operations. We introduce a framework that describes the process of preparing a code state, applying a diagonal physical gate, measuring a code syndrome, and applying a Pauli correction that may depend on the measured syndrome (the average logical channel induced by an arbitrary diagonal gate). The framework describes the interaction of code states and physical gates in terms of generator coefficients determined by the induced logical operator. The interaction of code states and diagonal gates depends on the signs of Z-stabilizers in the CSS code, and the proposed generator coefficient framework explicitly includes this degree of freedom. We derive necessary and sufficient conditions for an arbitrary diagonal gate to preserve the code space of a stabilizer code, and provide an explicit expression of the induced logical operator. When the diagonal gate is a quadratic form diagonal gate, the conditions can be expressed in terms of divisibility of weights in the two classical codes that determine the CSS code. These codes find applications in magic state distillation and elsewhere. When all the signs are positive, we characterize all possible CSS codes, invariant under transversal Z-rotation through $\pi/2^l$, that are constructed from classical Reed-Muller codes by deriving the necessary and sufficient constraints on the level $l$. According to the divisibility conditions, we construct new families of CSS codes using cosets of the first order Reed-Muller code defined by quadratic forms. The generator coefficient framework extends to arbitrary stabilizer codes but the more general class of non-degenerate stabilizer codes does not bring advantages when designing the code parameters.
Relying on the generator coefficient framework, we introduce a method of synthesizing CSS codes that realizes a target logical diagonal gate at some level $l$ in the Clifford hierarchy. The method combines three basic operations: concatenation, removal of Z-stabilizers, and addition of X-stabilizers. It explicitly tracks the logical gate induced by a diagonal physical gate that preserves a CSS code. The first step is concatenation, where the input is a CSS code and a physical diagonal gate at level $l$ inducing a logical diagonal gate at the same level. The output is a new code for which a physical diagonal gate at level $l+1$ induces the original logical gate. The next step is judicious removal of Z-stabilizers to increase the level of the induced logical operator. We identify three ways of climbing the logical Clifford hierarchy from level $l$ to level $l+1$, each built on a recursive relation on the Pauli coefficients of the induced logical operators. Removal of Z-stabilizers may reduce distance, and the purpose of the third basic operation, addition of X-stabilizers, is to compensate for such losses. Our approach to logical gate synthesis is demonstrated by two proofs of concept: the $[[2^{l+1} - 2,\, 2,\, 2]]$ triorthogonal code family, and the $[[2^m,\, \binom{m}{r},\, 2^{\min\{r,\, m-r\}}]]$ quantum Reed-Muller code family.
Item Open Access
Dynamics and Steady-states of Thin Film Droplets on Homogeneous and Heterogeneous Substrates (2019), Liu, Weifan.
In this dissertation, we study the dynamics and steady-states of thin liquid films on solid substrates using lubrication equations. Steady-states and bifurcation of thin films on chemically patterned substrates have been previously studied for thin films on infinite domains with periodic boundary conditions. Inspired by previous work, we study the steady-state thin film on a chemically heterogeneous 1-D domain of finite length, subject to no-flux boundary conditions. Based on the structure of the bifurcation diagram, we classify the 1-D steady-state solutions that can exist on such substrates into six distinct branches and develop asymptotic approximations of the steady-states on each branch. We show that, using perturbation expansions, the leading order solutions provide a good prediction of the steady-state thin film on a stepwise-patterned substrate. We also show that all of the analysis in 1-D can be easily extended to axisymmetric solutions in 2-D, which leads to qualitatively the same results.
Subject to long-wave instability, thin films break up and form droplets. In the presence of small fluxes, these droplets move and exchange mass. In 2002, Glasner and Witelski proposed a simplified model that predicts the pressure and position evolution of droplets in 1-D on homogeneous substrates when fluxes are small. While the model is capable of giving accurate predictions of the dynamics of droplets in the presence of small fluxes, it becomes less accurate as fluxes increase. We present a refined model that computes the pressure and position of a single droplet on a finite domain. Through numerical simulations, we show that the refined model captures single-droplet dynamics with higher accuracy than the previous model.
Item Open Access
Efficient Algorithms for High-dimensional Eigenvalue Problems (2020), Wang, Zhe.
The eigenvalue problem is a classical mathematical problem with wide applications. Although many algorithms and theories exist, solving the leading eigenvalue problem in extremely high dimensions remains challenging. The full configuration interaction (FCI) problem in quantum chemistry is such a problem. This thesis seeks to understand some existing algorithms for the FCI problem and to propose new efficient algorithms for the high-dimensional eigenvalue problem. In more detail, we first establish a general framework of inexact power iteration and prove convergence theorems for full configuration interaction quantum Monte Carlo (FCIQMC) and fast randomized iteration (FRI). Second, we reformulate the leading eigenvalue problem as an optimization problem and show the convergence of several coordinate descent methods (CDMs) for the leading eigenvalue problem. Third, we propose a new efficient algorithm, Coordinate Descent Full Configuration Interaction (CDFCI), based on coordinate descent methods for the FCI problem, which produces state-of-the-art results. Finally, we conduct various numerical experiments to fully test the algorithms.
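For reference, the deterministic backbone shared by these methods is plain power iteration; FCIQMC and FRI replace the exact matrix-vector product below with a sparse stochastic estimate (sketch in Python/numpy):

```python
import numpy as np

def power_iteration(matvec, n, tol=1e-8, max_iter=10_000, seed=0):
    """Dominant eigenpair of a linear operator given only its action `matvec`."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(max_iter):
        w = matvec(v)
        lam_new = v @ w                        # Rayleigh quotient estimate
        v = w / np.linalg.norm(w)
        if abs(lam_new - lam) < tol * max(1.0, abs(lam_new)):
            break
        lam = lam_new
    return lam, v
```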
Item Open Access
EM Scattering from Perforated Films: Transmission and Resonance (2012), Jackson, Aaron David.
We calculate electromagnetic transmission through periodic gratings using a mode-matching method for solving Maxwell's equations. We record the derivation of the equations involved for several variations of the problem, including one- and two-dimensionally periodic films, one-sided films, films with complicated periodicity, and a simpler formula for the case of a single contributing waveguide mode. We demonstrate the effects of the Rayleigh anomaly, which causes energy transmission to be very low compared to nearby frequencies, and the associated transmission maxima which may be as high as 100% for certain energy frequencies. Finally we present further variations of the model to account for the effects of conductivity, finite hole arrays, and collimation. We find that assuming the film is perfectly conducting with infinite periodicity does not change the transmission sufficiently to explain the difference between experimental and theoretical results. However, removing the assumption that the incident radiation is in the form of a plane wave brings the transmission in much better agreement with experimental results.
Item Open Access
Estimating the Intrinsic Dimension of High-Dimensional Data Sets: A Multiscale, Geometric Approach (2011), Little, Anna Victoria.
This work deals with the problem of estimating the intrinsic dimension of noisy, high-dimensional point clouds. A general class of sets is considered which are locally well-approximated by $k$-dimensional planes but embedded in a $D \gg k$ dimensional Euclidean space. Assuming one has samples from such a set, possibly corrupted by high-dimensional noise, if the data is linear the dimension can be recovered using PCA. However, when the data is non-linear, PCA fails, overestimating the intrinsic dimension. A multiscale version of PCA is thus introduced which is robust to small sample size, noise, and non-linearities in the data.
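The multiscale idea can be sketched as follows: compute local PCA spectra over a range of neighborhood radii and track how the squared singular values scale with radius (Python/numpy; the rule for reading the intrinsic dimension off these curves is the substance of the work and is not reproduced here):

```python
import numpy as np

def local_pca_spectra(X, center, radii):
    """Squared-singular-value spectra of neighborhoods of X[center] at several
    scales; intrinsic directions scale like r^2 while noise directions do not."""
    dists = np.linalg.norm(X - X[center], axis=1)
    spectra = {}
    for r in radii:
        nbhd = X[dists <= r]
        nbhd = nbhd - nbhd.mean(axis=0)          # center the local neighborhood
        s = np.linalg.svd(nbhd, compute_uv=False)
        spectra[r] = s ** 2 / max(len(nbhd), 1)  # per-point variance along PCs
    return spectra
```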