# Browsing by Subject "Uncertainty quantification"

Item Open Access: Adaptive Local Reduced Basis Method for Risk-Averse PDE-Constrained Optimization and Inverse Problems (2018). Zou, Zilong.

Many physical systems are modeled using partial differential equations (PDEs) with uncertain or random inputs. For such systems, naively propagating a fixed number of samples of the input probability law (or an approximation thereof) through the PDE is often inadequate to accurately quantify the risk associated with critical system responses. In addition, to manage the risk associated with the system response and devise risk-averse controls for such PDEs, one must obtain the numerical solution of a risk-averse PDE-constrained optimization problem, which requires substantial computational effort resulting from the discretization of the underlying PDE in both the physical and stochastic dimensions.

The Bayesian inverse problem, in which unknown system parameters must be inferred from noisy data of the system response, is another important class of problems that suffers from excessive computational cost due to the discretization of the underlying PDE. To accurately characterize the inverse solution and quantify its uncertainty, tremendous computational effort is typically required to sample from the posterior distribution of the system parameters given the data. Surrogate approximation of the PDE model is an important technique for expediting the inference process and tractably solving such problems.

In this thesis, we develop a goal-oriented, adaptive sampling and local reduced basis approximation for PDEs with random inputs. The method, which we denote local RB, determines a set of samples and an associated (implicit) Voronoi partition of the parameter domain on which we build local reduced basis approximations of the PDE solution. The local basis in a Voronoi cell is composed of the solutions at a fixed number of closest samples, as well as the gradient information in that cell. Thanks to the local nature of the method, the computational cost of the approximation does not increase as more samples are included in the local RB model. We select the local RB samples in an adaptive and greedy manner using an a posteriori error indicator based on the residual of the approximation.
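The nearest-samples idea can be pictured with a short sketch. This is an illustrative toy in NumPy, not the thesis implementation: the snapshot matrix, the neighbor count `k`, and the plain Euclidean distance are all assumptions here, and the gradient enrichment described above is omitted.

```python
import numpy as np

def local_rb_basis(mu, sample_params, snapshots, k=3):
    """Pick the k stored samples nearest to parameter mu (an implicit
    Voronoi / nearest-neighbor rule) and orthonormalize their solution
    snapshots into a local reduced basis."""
    d = np.linalg.norm(sample_params - mu, axis=1)  # distances to stored samples
    idx = np.argsort(d)[:k]                         # indices of the k closest
    Q, _ = np.linalg.qr(snapshots[:, idx])          # orthonormal local basis
    return Q, idx

# toy data: 5 stored parameter samples in 2D, solution snapshots of length 10
rng = np.random.default_rng(0)
params = rng.random((5, 2))
snaps = rng.random((10, 5))
Q, idx = local_rb_basis(np.array([0.5, 0.5]), params, snaps, k=3)
```

Because only the `k` nearest snapshots enter the basis, the cost of evaluating the approximation stays flat as the global sample set grows, which is the locality property noted above.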

Additionally, we modify our adaptive sampling process using an error indicator that is specifically targeted at the approximation of coherent risk measures evaluated at quantities of interest depending on PDE solutions. This allows us to tailor our method to efficiently quantify the risk associated with the system responses. We then combine our local RB method with an inexact trust-region method to efficiently solve risk-averse optimization problems with PDE constraints. We propose a numerical framework for systematically constructing surrogate models for the trust-region subproblem and the objective function using local RB approximations.

Finally, we extend our local RB method to efficiently approximate the Gibbs posterior distribution for inverse problems under uncertainty. The local RB method is employed to construct a cheap surrogate model for the loss function in the Gibbs posterior formula. To improve the accuracy of the surrogate approximation, we adopt a sequential Monte Carlo framework to guide the progressive and adaptive construction of the local RB surrogate. The resulting method provides subjective and efficient inference of unknown system parameters under general distribution and noise assumptions.
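As background, the Gibbs posterior reweights a prior by an exponentiated negative loss, pi(theta | data) proportional to exp(-beta * loss(theta)) * pi(theta). A minimal sketch of the self-normalized particle weights used in an SMC-style sweep follows; the particle losses and the temperature beta are placeholder values, not taken from the thesis.

```python
import numpy as np

def gibbs_posterior_weights(loss_vals, beta, prior_weights=None):
    """Self-normalized particle weights for a Gibbs posterior
    pi(theta | data) ~ exp(-beta * loss(theta)) * pi(theta),
    as used when reweighting particles in a sequential Monte Carlo sweep."""
    logw = -beta * np.asarray(loss_vals, dtype=float)
    if prior_weights is not None:
        logw = logw + np.log(prior_weights)
    logw -= logw.max()          # stabilize before exponentiating
    w = np.exp(logw)
    return w / w.sum()

# three hypothetical particles with increasing loss get decreasing weight
w = gibbs_posterior_weights([0.1, 0.5, 2.0], beta=1.0)
```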

We provide theoretical error bounds for our proposed local RB method and its extensions, and numerically demonstrate the performance of our methods through various examples.

Item Open Access: Adaptive Sparse Grid Approaches to Polynomial Chaos Expansions for Uncertainty Quantification (2015). Winokur, Justin Gregory.

Polynomial chaos expansions provide an efficient and robust framework to analyze and quantify uncertainty in computational models. This dissertation explores the use of adaptive sparse grids to reduce the computational cost of determining a polynomial model surrogate while examining and implementing new adaptive techniques.

Determination of chaos coefficients using traditional tensor-product quadrature suffers from the so-called curse of dimensionality, where the number of model evaluations scales exponentially with dimension. Previous work used sparse Smolyak quadrature to temper this dimensional scaling, and was applied successfully to an expensive ocean general circulation model, HYCOM, during the September 2004 passage of Hurricane Ivan through the Gulf of Mexico. Results from this investigation suggested that adaptivity could yield great gains in efficiency. However, efforts at adaptivity are hampered by quadrature accuracy requirements.
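To make the dimensional scaling concrete, here is a quick node count for full tensor-product quadrature; the 5-point-per-dimension rule is an arbitrary choice for illustration.

```python
# A 5-point quadrature rule per dimension needs 5**d nodes as a full
# tensor product; the count explodes with the dimension d (the curse of
# dimensionality), which Smolyak sparse grids temper to roughly
# O(n * (log n)**(d-1)) nodes at comparable accuracy.
tensor_nodes = {d: 5**d for d in (2, 4, 8, 10)}
# d=2 -> 25, d=4 -> 625, d=8 -> 390625, d=10 -> 9765625
```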

We explore the implementation of a novel adaptive strategy to design sparse ensembles of oceanic simulations suitable for constructing polynomial chaos surrogates. We use a recently developed adaptive pseudo-spectral projection (aPSP) algorithm that is based on a direct application of Smolyak's sparse grid formula, and that allows for the use of arbitrary admissible sparse grids. Such a construction ameliorates the severe restrictions posed by insufficient quadrature accuracy. The adaptive algorithm is tested using an existing simulation database of the HYCOM model during Hurricane Ivan. The *a priori* tests demonstrate that sparse and adaptive pseudo-spectral constructions lead to substantial savings over isotropic sparse sampling.

In order to provide a finer degree of resolution control along two distinct subsets of model parameters, we investigate two methods for building polynomial approximations. Both approaches are based on pseudo-spectral projection (PSP) methods on adaptively constructed sparse grids. Control of the error along different subsets of parameters may be needed in the case of a model depending on uncertain parameters and deterministic design variables. We first consider a nested approach where an independent adaptive sparse grid pseudo-spectral projection is performed along the first set of directions only, and at each point a sparse grid is constructed adaptively in the second set of directions. We then consider the application of aPSP in the space of all parameters, and introduce directional refinement criteria to provide a tighter control of the projection error along individual dimensions. Specifically, we use a Sobol decomposition of the projection surpluses to tune the sparse grid adaptation. The behavior and performance of the two approaches are compared for a simple two-dimensional test problem and for a shock-tube ignition model involving 22 uncertain parameters and 3 design parameters. The numerical experiments indicate that whereas both methods provide effective means for tuning the quality of the representation along distinct subsets of parameters, adaptive PSP in the global parameter space generally requires fewer model evaluations than the nested approach to achieve similar projection error.

In order to increase efficiency even further, a subsampling technique is developed to allow for local adaptivity within the aPSP algorithm. The local refinement is achieved by exploiting the hierarchical nature of nested quadrature grids to determine regions of estimated convergence. In order to achieve global representations with local refinement, synthesized model data from a lower-order projection is used for the final projection. The final subsampled grid was also tested with two more robust, sparse projection techniques, namely compressed sensing and hybrid least-angle regression. These methods are evaluated on two sample test functions and then as an *a priori* analysis of the HYCOM simulations and the shock-tube ignition model investigated earlier. Small but non-trivial efficiency gains were found in some cases; in others, a large reduction in model evaluations was realized with only a small loss of model fidelity. Further extensions and capabilities are recommended for future investigations.

Item Open Access: Deep Learning Based Uncertainty Quantification for Improving Clinical Diagnostic Tools (2023). Jin, Felix Qiaochu.

Deep learning methods have impacted a wide number of fields, and interest in their applications to clinical medicine continues to grow. Interpretable and uncertainty-aware models are critical for the adoption of artificial intelligence and machine learning in medicine, and explicit uncertainty quantification methods are used in this work to train deep neural networks that output an uncertainty value. This dissertation investigates the application of explicit uncertainty quantification with deep learning to tackle data-processing problems in tympanometry, ultrasound shear wave elasticity (SWE) imaging, and ultrasound B-mode imaging.

To facilitate layperson-guided tympanometry, Chapter 2 describes an uncertainty-aware hybrid deep learning model that classifies tympanograms into types A (normal), B (effusion/perforation), and C (retraction), trained using the audiologist's interpretation as the gold standard. The dataset consisted of 4810 pairs of narrow-band tympanometry tracings acquired by an audiologist and a layperson in school-aged children from a trial in rural Alaska with a high prevalence of infection-related hearing loss. The model used a deep neural network (DNN) to estimate the tympanometric peak pressure, ear canal volume, and associated uncertainties, and then used a three-level decision tree based on these features to determine the tympanogram classification. For layperson-acquired data, the model achieved a sensitivity of 95.2% (93.3, 97.1) and an AUC of 0.968 (0.955, 0.978). The model's sensitivity was greater than that of the tympanometer's built-in software [79.2% (75.5, 82.8)] or a set of clinically recommended normative values [56.9% (52.4, 61.3)]. For audiologist-acquired data, the model achieved a higher AUC of 0.987 (0.980, 0.993) but an equivalent sensitivity of 95.2% (93.3, 97.1).
This chapter demonstrates that automated tympanogram classification using a hybrid deep learning classifier could facilitate layperson-guided tympanometry in hearing screening programs for children in resource-constrained communities.

In ultrasound SWE imaging, a number of algorithms exist for estimating the shear wave speed (SWS) from spatiotemporal displacement data. However, no method provides a well-calibrated and practical uncertainty metric, hindering SWE's clinical adoption and utility in downstream decision-making. In Chapter 3, a deep learning based SWS estimator is designed to simultaneously produce a quantitative and well-calibrated uncertainty value for each estimate by outputting the two parameters m and σ of a log-normal probability distribution. The working dataset consisted of in vivo 2D-SWE data of the cervix collected from 30 pregnant subjects, with 551 total acquisitions and over 2 million sample points. Points were grouped by uncertainty into bins to assess uncertainty calibration: the predicted uncertainty closely matched the root-mean-square error, with an average absolute percent deviation of 3.84%. An ensemble model was created using leave-one-out training that estimated uncertainty with better calibration (1.45%) than any individual ensemble member when tested on a held-out patient's data. The DNN was applied to an external dataset to evaluate its generalizability, and a real-time implementation was demonstrated on a clinical ultrasound scanner. The trained model, named SweiNet, is shared openly to provide the research community with a fast SWS estimator that also outputs a well-calibrated estimate of the predictive uncertainty.

Chapter 4 introduces 3D rotational SWE imaging for characterizing skeletal muscle as an incompressible, transversely isotropic (ITI) material in an effort to assess muscle health and function. To facilitate ongoing research, three tools were developed.
First, a Fourier-domain approach is described for calculating 3D muscle fiber orientation (MFO) from 3D B-mode volumes acquired using two imaging setups: 1) a cylindrical volume acquired by rotating a linear transducer, and 2) a rectangular volume acquired by a rectilinear matrix array transducer. Most existing approaches apply only to 2D B-mode images and detect individual fibers to extract the tilt, the angle fibers make with a horizontal plane. In a 3D B-mode volume, spherical coordinates and two angles are needed to describe orientation: the tilt and the rotation angles, where rotation is defined relative to a reference vertical plane in the volume. The proposed algorithm was validated on in silico and in vivo data: errors in rotation and tilt were within 1° for both imaging setups and less than the observed in vivo MFO heterogeneity. Second, a versatile Radon-transform based SWS estimator was developed that can accept arbitrary masks to select particular regions in space-time data, isolating the two different shear wave propagation modes that are seen in ITI materials and in in vivo muscle data. Hand-drawn masks were initially used to identify these wave modes. These masks were then used to train a DNN to automate mask drawing and alleviate the need for manual processing. The DNN identified 91% of the shear waves, and estimated speeds had an average difference of 7.6%. Third, the wave equation for an ITI material was derived and then solved using physics-informed neural networks (PINNs), a relatively new technique for numerically solving differential equations with the advantages of being fast, compressed, analytic, and free of space/time discretization. Presently, simulations of ITI materials require time-consuming finite element modeling (FEM) or Green's function calculations. This approach took roughly six times less time than an equivalent FEM simulation, and the PINN solution had multiple shear wave modes that matched the FEM to first order.
The PINN solution did not have the reflection artifacts seen in the FEM solution. Estimated SWSs had a mean absolute difference of 4.7%. The differences in wave width and amplitude between the two suggest the need to further validate the PINN approach against FEM and Green's function methods.

In skeletal muscle, the primary SWS as a function of propagation angle forms an ellipse with the major axis oriented in the muscle fiber direction. Estimating the fiber rotation angle from a 3D B-mode volume is useful for SWE data processing, SWS estimation, and ellipse fitting. However, existing algorithms are sensitive to artifacts and can produce gross estimation errors differing by 45° or more from the true fiber rotation. In Chapter 5, a DNN is designed and trained to predict the fiber rotation angle by parameterizing a von Mises distribution, which provides both the estimated rotation and its associated uncertainty. On simulated data with known fiber rotation, the model had an RMSE of 3.5°, and the uncertainty closely matched the expected theoretical values when known amounts of fiber heterogeneity were introduced. For in vivo data of the vastus lateralis muscle, the SWS ellipse fit was used as ground truth, and the DNN model RMSE was 6.9°, compared to 16.9° for the existing Fourier-domain algorithm. The DNN had no estimates with an error greater than 30°. The predicted uncertainty correlated with RMSE, but was smaller by a factor of four. This deep learning approach will provide more accurate and robust fiber rotation estimates for use in shear wave data processing and muscle characterization.

In summary, this work demonstrates the effectiveness of deep learning methods for addressing specific data-processing needs of research aimed at developing new clinical applications of tympanometry and of ultrasound SWE and B-mode imaging for the diagnosis and monitoring of disease.
This work also demonstrates effective uncertainty quantification using the explicit estimation method, and suggests how uncertainty values may be useful for downstream decision making and data processing and potentially as a stand-alone characteristic value.
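As background on the explicit uncertainty estimation used in work like SweiNet, a network that outputs the two parameters m and σ of a log-normal distribution can be trained with the corresponding negative log-likelihood. The sketch below shows that loss in NumPy for illustration only; the actual training framework and loss reduction are not specified in the abstract.

```python
import numpy as np

def lognormal_nll(y, m, sigma):
    """Mean negative log-likelihood of positive observations y under a
    log-normal distribution with parameters m and sigma. Training a
    network to output (m, sigma) under this loss makes sigma a
    per-sample uncertainty estimate that can be checked for calibration."""
    y, m, sigma = (np.asarray(a, dtype=float) for a in (y, m, sigma))
    z = (np.log(y) - m) / sigma
    return np.mean(0.5 * z**2 + np.log(sigma) + np.log(y)
                   + 0.5 * np.log(2 * np.pi))
```

Binning predictions by σ and comparing against the empirical RMSE in each bin is one way to assess the calibration described above.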

Item Open Access: Influence of Material Properties and Fracture Properties on Crack Nucleation and Growth (2021). Zeng, Bo.

In this thesis, we studied the influence of spatial variations in the fracture property and the elastic property on the resulting crack patterns during soil desiccation. Young's modulus was selected as the representative elastic property and the fracture toughness as the representative fracture property. Their well-defined, spatially fluctuating random fields are the input of the phase-field fracture simulation, and the resulting damage field is the output. Various postprocessing steps were carried out to analyze the resulting damage fields. After comparing the morphology of the cracks and the fragment size distributions, a preliminary conjecture was that the two inputs have very similar influence on the output. The Pearson correlation coefficient, as a first attempt at sensitivity analysis, also gave indistinguishable correlation values for the two. A more rigorous approach with a well-isolated sensitivity measure was needed, which brought us to Sobol' indices based on polynomial chaos expansion, a global sensitivity analysis measure that apportions the variance of the output to the variance of each input and to combinations of inputs.
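For an orthonormal polynomial chaos expansion, first-order Sobol' indices follow directly from the squared coefficients: each squared coefficient contributes to the total variance, and terms involving only input i give that input's first-order index. A minimal sketch, with invented toy coefficients and multi-indices:

```python
import numpy as np

def sobol_from_pce(coeffs, multi_indices):
    """First-order Sobol' indices from an orthonormal PCE: total variance
    is the sum of squared non-constant coefficients, and terms whose
    multi-index touches only input i give S_i."""
    c = np.asarray(coeffs, dtype=float)
    a = np.asarray(multi_indices)
    nonconst = a.sum(axis=1) > 0
    var = np.sum(c[nonconst] ** 2)
    S = []
    for i in range(a.shape[1]):
        only_i = (a[:, i] > 0) & (np.delete(a, i, axis=1).sum(axis=1) == 0)
        S.append(np.sum(c[only_i] ** 2) / var)
    return np.array(S)

# toy PCE: y = 1 + 2*P1(x1) + 1*P1(x2) + 0.5*P1(x1)*P1(x2), orthonormal basis
S = sobol_from_pce([1.0, 2.0, 1.0, 0.5],
                   [[0, 0], [1, 0], [0, 1], [1, 1]])
```

The interaction term keeps the first-order indices from summing to one, which is exactly the "combination of inputs" contribution mentioned above.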

Item Open Access: Model Reduction and Domain Decomposition Methods for Uncertainty Quantification (2017). Contreras, Andres Anibal.

This dissertation focuses on acceleration techniques for uncertainty quantification (UQ). The manuscript is divided into five chapters. Chapter 1 provides an introduction and a brief summary of Chapters 2, 3, and 4. Chapter 2 introduces a model reduction strategy that is used in the context of elasticity imaging to infer the presence of an inclusion embedded in a soft matrix, mimicking tumors in soft tissues. The method relies on polynomial chaos (PC) expansions to build a dictionary of surrogate models, where each surrogate is constructed using a different geometrical configuration of the potential inclusion. A model selection approach is used to discriminate among the different models and eventually select the most appropriate one to estimate the likelihood that an inclusion is present in the domain. In Chapter 3, we use a domain decomposition (DD) approach to compute the Karhunen-Loeve (KL) modes of a random process through the use of local KL expansions at the subdomain level. Furthermore, we analyze the relationship between the local random variables associated with the local KL expansions and the global random variables associated with the global KL expansion. In Chapter 4, we take advantage of these local random variables and use DD techniques to reduce the computational cost of solving a stochastic elliptic equation (SEE) via a Monte Carlo sampling method. The approach exploits the lower stochastic dimension at the subdomain level to construct a PC expansion of a reduced linear system that is later used to compute samples of the solution. Thus, the approach consists of two main stages: 1) a preprocessing stage in which PC expansions of a condensed problem are computed, and 2) a Monte Carlo sampling stage where samples of the solution are computed in order to solve the SEE.
Finally, in Chapter 5 some brief concluding remarks are provided.
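In the discrete setting, KL modes are simply the leading eigenpairs of the covariance matrix of the random process. The sketch below is a generic illustration with an assumed exponential covariance on a 1D grid, not the dissertation's subdomain construction:

```python
import numpy as np

def kl_modes(cov, n_modes):
    """Discrete Karhunen-Loeve decomposition: the leading eigenpairs of
    the (symmetric, positive-definite) covariance matrix, sorted by
    decreasing eigenvalue."""
    vals, vecs = np.linalg.eigh(cov)            # eigh returns ascending order
    order = np.argsort(vals)[::-1][:n_modes]    # keep the largest n_modes
    return vals[order], vecs[:, order]

# assumed exponential covariance with correlation length 0.2 on [0, 1]
x = np.linspace(0.0, 1.0, 50)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)
lam, phi = kl_modes(C, 5)
```

Fast eigenvalue decay is what makes truncated KL expansions effective, and shorter correlation lengths at the subdomain level are what yield the lower local stochastic dimension exploited in Chapter 4.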

Item Open Access: Multimodal Probabilistic Inference for Robust Uncertainty Quantification (2021). Jerfel, Ghassen.

Deep learning models, which form the backbone of modern ML systems, generalize poorly to small changes in the data distribution. They are also bad at signalling failure, making predictions with high confidence when their training data or fragile assumptions make them unlikely to make reasonable decisions. This lack of robustness makes it difficult to trust their use in safety-critical settings. Accordingly, there is a pressing need to equip models with a notion of uncertainty to understand their failure modes and detect when their decisions cannot be used or require intervention. Uncertainty quantification is thus crucial for ML systems to work consistently on real-world data and to fail loudly when they don't.

One growing line of research on uncertainty quantification is probabilistic modelling, which is concerned with capturing model uncertainty by placing a distribution over models that can be marginalized at test time. This is especially useful for underspecified models, which can have diverse near-optimal solutions at training time with similar population-level performance. However, probabilistic modelling approaches such as Bayesian neural networks (BNNs) do not scale well in terms of memory and runtime, and often underperform simple deterministic baselines in terms of accuracy. Furthermore, BNNs underperform deep ensembles because they fail to explore multiple modes in the loss space, despite being effective at capturing uncertainty within a single mode.

In this thesis, we develop multimodal representations of model uncertainty that can capture a diverse set of hypotheses. We first propose a scalable family of BNN priors (and corresponding approximate posteriors) that combine local (i.e., within-mode) uncertainty with mode averaging to deliver robust and calibrated uncertainty estimates, in addition to improving accuracy both in and out of distribution. We then leverage a multimodal representation of uncertainty to modulate the amount of information transfer between tasks in meta-learning. Our proposed framework integrates Bayesian non-parametric mixtures with deep learning to enable NNs to adapt their capacity as more data is observed, which is crucial for lifelong learning. Finally, we propose to replace the reverse Kullback-Leibler divergence (RKL), known for its mode-seeking behavior and for underestimating posterior covariance, with the forward KL (FKL) divergence in a novel, theoretically guided inference procedure. This ensures the efficient combination of variational boosting with adaptive importance sampling. The proposed algorithm offers a well-defined compute-accuracy trade-off and is guaranteed to converge to the optimal multimodal variational solution as well as the optimal importance sampling proposal distribution.
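The mode-seeking versus mass-covering contrast between reverse and forward KL can be seen already for univariate Gaussians, where the KL divergence has a closed form. A toy illustration (the target and candidate distributions below are arbitrary):

```python
import numpy as np

def kl_gauss(m0, s0, m1, s1):
    """Closed-form KL( N(m0, s0^2) || N(m1, s1^2) )."""
    return np.log(s1 / s0) + (s0**2 + (m0 - m1)**2) / (2 * s1**2) - 0.5

# target p = N(0, 2^2); candidate q = N(0, 1) underestimates the spread
fkl = kl_gauss(0.0, 2.0, 0.0, 1.0)  # forward KL(p || q): mass-covering
rkl = kl_gauss(0.0, 1.0, 0.0, 2.0)  # reverse KL(q || p): mode-seeking
```

The forward direction penalizes the too-narrow candidate much more heavily, which is why minimizing FKL discourages the covariance underestimation attributed to RKL above.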

Item Open Access: On Uncertainty Quantification for Systems of Computer Models (2017). Kyzyurova, Ksenia.

Scientific inquiry about natural phenomena and processes increasingly relies on the use of computer models as simulators of such processes. The challenge of using computer models for scientific investigation is that they are expensive in terms of computational cost and resources. However, the core methodology of fast statistical emulation (approximation) of a computer model overcomes this computational problem.

Complex phenomena and processes are often described not by a single computer model, but by a system of computer models or simulators. Direct emulation of a system of simulators may be infeasible for computational and logistical reasons.

This thesis proposes a statistical framework for fast emulation of systems of computer models and demonstrates its potential for inferential and predictive scientific goals.

The first chapter of the thesis introduces the Gaussian stochastic process (GaSP) emulator of a single simulator and summarizes the ideas and findings in the rest of the thesis. The second chapter investigates the possibility of using independent GaSP emulators of computer models for fast construction of emulators of systems of computer models. The resulting approximation to a system of computer models is called the linked emulator. The third chapter discusses the irrelevance of attempting to model multivariate output of a computer model for the purpose of emulation of that model. The linear model of coregionalization (LMC) is used to demonstrate this irrelevance, from both a theoretical perspective and simulation studies. The fourth chapter introduces a framework for calibration of a system of computer models using its linked emulator. The linked emulator allows for the development of independent emulators of submodels on their own separately constructed design spaces, thus leading to effective dimension reduction in the explored parameter space. The fifth chapter addresses the use of some non-Gaussian emulators, in particular censored and truncated GaSP emulators. The censored emulator is constructed to appropriately account for zero-inflated output of a computer model, arising when there are large regions of the input space for which the computer model output is zero. The truncated GaSP accommodates computer model output that is constrained to appear in a certain region. The linked emulator for systems of computer models whose individual subemulators are either censored or truncated is also presented. The last chapter concludes with an exposition of further research directions based on the ideas explored in the thesis.
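As background, a basic GaSP emulator conditions a Gaussian process on the available simulator runs to deliver a cheap predictive mean and variance at untried inputs. The sketch below is a generic zero-mean, squared-exponential GP in NumPy with arbitrary hyperparameters, for illustration only; it is not the thesis's estimation procedure (which would estimate the hyperparameters and mean function from the runs).

```python
import numpy as np

def gp_emulate(X, y, Xs, ell=0.2, sf=1.0, jitter=1e-10):
    """Zero-mean GP emulator with squared-exponential kernel:
    posterior mean and variance at 1D test inputs Xs given runs (X, y)."""
    def k(A, B):
        d2 = (A[:, None] - B[None, :]) ** 2
        return sf**2 * np.exp(-0.5 * d2 / ell**2)
    K = k(X, X) + jitter * np.eye(len(X))       # jitter for numerical stability
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = sf**2 - np.einsum('ij,ij->i', Ks, np.linalg.solve(K, Ks.T).T)
    return mean, var

# three "simulator runs" of a toy function, emulated at a training input
X = np.array([0.0, 0.5, 1.0])
y = np.sin(X)
mean, var = gp_emulate(X, y, np.array([0.5]))
```

The emulator interpolates the runs (zero predictive variance there) and reports growing variance away from them, which is what a linked emulator propagates through a system of simulators.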

The methodology developed in this thesis is illustrated by an application to quantification of the hazard from pyroclastic flows from the Soufrière Hills Volcano on the island of Montserrat; a case study on prediction of volcanic ash transport and dispersal from the Eyjafjallajökull volcano, Iceland, during April 14-16, 2010; and calibration of a vapour-liquid equilibrium model, a submodel of the Aspen Plus© chemical process software for design and deployment of amine-based CO₂ capture systems.

Item Open Access: Recent Advances on the Design, Analysis, and Decision-Making with Expensive Virtual Experiments (2024). Ji, Yi.

With breakthroughs in virtual experimentation, computer simulation has been replacing physical experiments that are prohibitively expensive or infeasible to perform at a large scale. However, as systems become more complex and realistic, such simulations can be extremely time-consuming, and simulating the entire parameter space becomes impractical. One solution is computer emulation, which builds a predictive model based on a handful of simulation runs. The Gaussian process is a popular emulator used for this purpose in many physics and engineering applications. In particular, for complicated scientific phenomena like the quark-gluon plasma, employing a multi-fidelity emulator to pool information from multi-fidelity simulation data may enhance predictive performance while simultaneously reducing simulation costs. In this dissertation, we explore two novel approaches to multi-fidelity Gaussian process modeling. The first is the Graphical Multi-fidelity Gaussian Process (GMGP) model, which embeds scientific dependencies among multi-fidelity data in a directed acyclic graph (DAG). The second is the Conglomerate Multi-fidelity Gaussian Process (CONFIG) model, applicable to scenarios where the accuracy of a simulator is controlled by multiple continuous fidelity parameters.

Software engineering is another domain that relies heavily on virtual experimentation. To ensure the robustness of a new software application, it must go through extensive testing and validation before production. Such testing is typically carried out through virtual experimentation and can require substantial computing resources, particularly as system complexity grows. Fault localization is a key step in software testing, as it pinpoints the root causes of failures based on executed test case outcomes. However, existing fault localization techniques are mostly deterministic and provide limited insight into assessing the probabilistic risk of failure-inducing input combinations. To address this limitation, we present a novel Bayesian fault localization (BayesFLo) framework for software testing, yielding a principled and probabilistic ranking of suspicious inputs for identifying the root causes of software failures.

Item Open Access: Stochastic Modeling of Parametric and Model-Form Uncertainties in Computational Mechanics: Applications to Multiscale and Multimodel Predictive Frameworks (2023). Zhang, Hao.

Uncertainty quantification (UQ) plays a critical role in computational science and engineering. The representation of uncertainties stands at the core of UQ frameworks and encompasses the modeling of parametric uncertainties (uncertainties affecting parameters in a well-known model) and model-form uncertainties (uncertainties defined at the operator level). Past contributions in the field have primarily focused on parametric uncertainties in idealized environments involving simple state spaces and index sets. On the other hand, the consideration of model-form uncertainties (beyond model error correction) is still in its infancy. In this context, this dissertation aims to develop stochastic modeling approaches to represent these two forms of uncertainties in multiscale and multimodel settings.

The case of spatially-varying geometrical perturbations on nonregular index sets is first addressed. We propose an information-theoretic formulation where a push-forward map is used to induce bounded variations and the latent Gaussian random field is implicitly defined through a stochastic partial differential equation on the manifold defining the surface of interest. Applications to a gyroid structure and patient-specific brain interfaces are presented. We then address operator learning in a multiscale setting where we propose a data-free training method, applied to Fourier neural operators. We investigate the homogenization of random media defined at microscopic and mesoscopic scales. Next, we develop a Riemannian probabilistic framework to capture operator-form uncertainties in the multimodel setting (i.e., when a family of model candidates is available). The proposed methodology combines a proper-orthogonal-decomposition reduced-order model with Riemannian operators ensuring admissibility in the almost sure sense. The framework exhibits several key advantages, including the ability to generate a model ensemble within the convex hull defined by model proposals and to constrain the mean in the Fréchet sense, as well as ease of implementation. The method is deployed to investigate model-form uncertainties in various molecular dynamics simulations on graphene sheets. We finally propose an extension of this framework to systems described by coupled partial differential equations, with emphasis on the phase-field approach to brittle fracture.

Item Open Access: Stochastic Modeling of Physical Parameters on Complex Domains, with Applications to 3D Printed Materials (2022). Chu, Shanshan.

The proper modeling of uncertainties in constitutive models is a central concern in the mechanics of materials and uncertainty quantification. Within the framework of probability theory, this entails the construction of suitable probabilistic models amenable to forward simulation and inverse identification based on limited data. The development of new manufacturing technologies, such as additive manufacturing, and the availability of data at unprecedented levels of resolution raise new challenges related to the integration of geometrical complexity and material inhomogeneity, both aspects being intertwined through processing.

In this dissertation, we address the construction, identification, and validation of stochastic models for spatially dependent material parameters on nonregular (i.e., nonconvex) domains. We focus on metal additive manufacturing, with the aim of closely integrating experimental measurements obtained by collaborators, and consider the randomization of anisotropic linear elastic and plasticity constitutive models. We first present a stochastic modeling framework enabling the definition and sampling of non-Gaussian models on complex domains. The proposed methodology combines a stochastic partial differential equation approach, which is used to account for geometrical features on the fly, with an information-theoretic construction, which ensures well-posedness of the associated stochastic boundary value problems through the derivation of ad hoc transport maps.

We then present three case studies where the framework is deployed to model uncertainties in location-dependent anisotropic elasticity tensors and reduced Hill’s plasticity coefficients (for 3D printed stainless steel 316L). Experimental observations at various scales are integrated for calibration (either through direct estimators or by solving statistical inverse problems by means of the maximum likelihood method) and validation (whenever possible), including structural responses and multiscale predictions based on microstructure samples. The role of material symmetries is specifically investigated, and it is shown that preserving symmetries is, indeed, key to appropriately capturing statistical fluctuations. Results pertaining to the correlation structure indicate strong anisotropy for both types of behaviors, in accordance with fine-scale observations.

Item Open Access Uncertainty in the Bifurcation Diagram of a Model of Heart Rhythm Dynamics(2014) Ring, CarolineTo understand the underlying mechanisms of cardiac arrhythmias, computational models are used to study heart rhythm dynamics. The parameters of these models carry inherent uncertainty. Therefore, to interpret the results of these models, uncertainty quantification (UQ) and sensitivity analysis (SA) are important. Polynomial chaos (PC) is a computationally efficient method for UQ and SA in which a model output Y, dependent on some independent uncertain parameters represented by a random vector ξ, is approximated as a spectral expansion in multidimensional orthogonal polynomials in ξ. The expansion can then be used to characterize the uncertainty in Y.
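The PC idea described above can be sketched in one dimension: fit a Legendre expansion of a toy model output Y(ξ), with ξ uniform on [-1, 1], and read the mean and variance directly off the coefficients. The toy model and the degree are hypothetical choices for illustration; the study itself uses multidimensional expansions.

```python
import numpy as np
from numpy.polynomial import legendre

# Toy model Y = f(xi), a hypothetical stand-in for an uncertain model response.
def model(xi):
    return np.exp(0.5 * xi) + 0.1 * xi**2

rng = np.random.default_rng(0)
xi = rng.uniform(-1.0, 1.0, 200)   # samples of the uncertain input
y = model(xi)                      # corresponding model outputs

# Degree-6 Legendre PC expansion fitted by least squares:
# Y ~ sum_k c_k P_k(xi), with P_k orthogonal w.r.t. the uniform measure.
coeffs = legendre.legfit(xi, y, deg=6)

# UQ quantities come almost for free from the coefficients:
# mean = c_0, and Var = sum_{k>=1} c_k^2 / (2k + 1) since
# E[P_k^2] = 1 / (2k + 1) under the uniform density on [-1, 1].
mean = coeffs[0]
var = sum(c**2 / (2 * k + 1) for k, c in enumerate(coeffs) if k > 0)
print(mean, var)
```

Sensitivity indices generalize this in several dimensions: partial variances are obtained by summing squared coefficients over subsets of the multi-index set.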

PC methods were applied to UQ and SA of the dynamics of a two-dimensional return-map model of cardiac action potential duration (APD) restitution in a paced single cell. Uncertainty was considered in four parameters of the model: three time constants and the pacing stimulus strength. The basic cycle length (BCL) (the period between stimuli) was treated as the control parameter. Model dynamics were characterized with bifurcation analysis, which determines the APD and stability of the fixed points of the model over a range of BCLs, as well as the BCLs at which bifurcations occur. These quantities can be plotted in a bifurcation diagram, which summarizes the dynamics of the model. PC UQ and SA were performed for these quantities. The UQ results were summarized in a novel probabilistic bifurcation diagram that visualizes the APD and stability of fixed points as uncertain quantities.

Classical PC methods assume that model outputs exist and are reasonably smooth over the full domain of ξ. Because models of heart rhythm often exhibit bifurcations and discontinuities, their outputs may satisfy the existence and smoothness assumptions not on the full domain, but only on subdomains, which may be irregularly shaped. On these subdomains, the random variables representing the parameters may no longer be independent. PC methods must therefore be modified for the analysis of these discontinuous quantities. The Rosenblatt transformation maps the variables on the subdomain onto a rectangular domain; the transformed variables are independent and uniformly distributed. A new numerical estimation of the Rosenblatt transformation was developed that improves accuracy and computational efficiency compared to existing kernel density estimation methods. PC representations of the outputs in the transformed variables were then constructed. Coefficients of the PC expansions were estimated using Bayesian inference methods. For discontinuous model outputs, SA was performed using a sampling-based variance-reduction method, with the PC estimation used as an efficient proxy for the full model.
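The Rosenblatt transformation is easiest to see on a case where the conditional CDFs are available in closed form. The sketch below uses a hypothetical triangular subdomain with a uniform joint density, not the irregular subdomains of the heart-rhythm model (where the study's numerical estimator is needed instead).

```python
import numpy as np

# Hypothetical subdomain T = {(x1, x2): 0 < x2 < x1 < 1}, uniform density 2 on T.
# Marginal of x1:    f1(x1) = 2*x1     =>  F1(x1) = x1**2
# Conditional of x2: f(x2|x1) = 1/x1   =>  F(x2|x1) = x2/x1
def rosenblatt(x1, x2):
    u1 = x1**2        # u1 = F1(x1)
    u2 = x2 / x1      # u2 = F(x2 | x1)
    return u1, u2

rng = np.random.default_rng(1)
# Sample uniformly on T by rejection from the unit square.
pts = rng.uniform(0.0, 1.0, (20000, 2))
pts = pts[pts[:, 1] < pts[:, 0]]
u1, u2 = rosenblatt(pts[:, 0], pts[:, 1])

# After the transform, (u1, u2) are independent U(0, 1) variables,
# so classical PC machinery applies on the unit square.
print(u1.mean(), u2.mean())        # both near 0.5
print(np.corrcoef(u1, u2)[0, 1])   # near 0
```

On irregular subdomains the marginal and conditional CDFs have no closed form, which is exactly where the study's improved numerical estimator replaces the analytic formulas above.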

To evaluate the accuracy of the PC methods, the PC UQ and SA results were compared to large-sample Monte Carlo (MC) UQ and SA results. The PC UQ and SA results for the fixed-point APDs, and for the probability that a stable fixed point existed at each BCL, were very close to the MC results for those quantities. However, the PC UQ and SA results for the bifurcation BCLs were less accurate.

The computational time required for the PC and Monte Carlo methods was also compared. PC analysis (including the Rosenblatt transformation and Bayesian inference) required less than 10 total hours of computational time, of which approximately 30 minutes was devoted to model evaluations, compared to approximately 65 hours required for Monte Carlo sampling of the model outputs at 1 × 10⁶ ξ points.

PC methods provide a useful framework for efficient UQ and SA of the bifurcation diagram of a model of cardiac APD dynamics. Model outputs with bifurcations and discontinuities can be analyzed using modified PC methods. The methods applied and developed in this study may be extended to other models of heart rhythm dynamics. These methods have potential for use for uncertainty and sensitivity analysis in many applications of these models, including simulation studies of heart rate variability, cardiac pathologies, and interventions.

Item Open Access Uncertainty Quantification in Earth System Models Using Polynomial Chaos Expansions(2017) Li, GuotuThis work explores the stochastic responses of various earth system models to different random sources, using polynomial chaos (PC) approaches. The following earth systems are considered: the HYbrid Coordinate Ocean Model (HYCOM, an ocean general circulation model (OGCM)) for the study of ocean circulation in the Gulf of Mexico (GoM); the Unified Wave INterface - Coupled Model (UWIN-CM, a dynamically coupled atmosphere-wave-ocean system) for modeling Hurricane Earl (2010); and an earthquake seismology model for Bayesian inference of fault plane configurations.

In the OGCM study, we aim at analyzing the combined impact of uncertainties in initial conditions and wind forcing fields on ocean circulation using PC expansions. Empirical Orthogonal Functions (EOFs) are used to represent both the spatial perturbations of the initial condition and the space-time wind forcing fields, namely in the form of a superposition of modal components with uniformly distributed random amplitudes. Forward deterministic HYCOM simulations are used to propagate the input uncertainties in the ocean circulation in the GoM during the 2010 Deepwater Horizon (DWH) oil spill, and to generate a realization ensemble from which PC surrogate models are constructed for both localized and field quantities of interest (QoIs), focusing specifically on Sea Surface Height (SSH) and Mixed Layer Depth (MLD). These PC surrogate models are constructed using the Basis Pursuit DeNoising (BPDN) methodology, and their performance is assessed through various statistical measures. A global sensitivity analysis is then performed to quantify the impact of the individual random sources, as well as their interactions, on the ocean circulation. At the basin scale, SSH in the deep GoM is mostly sensitive to initial condition perturbations, while over the shelf it is sensitive to wind forcing perturbations. On the other hand, the basin MLD is almost exclusively sensitive to wind perturbations. For both quantities, the two random sources of uncertainty (initial condition and wind forcing) have limited interactions. Finally, computations indicate that whereas local quantities can exhibit complex behavior that necessitates a large number of realizations to build PC surrogate models, the modal analysis of field sensitivities can be suitably achieved with a moderate-size ensemble.
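The EOF-based perturbation construction can be sketched with a toy snapshot ensemble: compute the EOFs of the anomalies via an SVD, then build a perturbed field as the ensemble mean plus leading modes weighted by uniformly distributed random amplitudes, mirroring the representation described above. The ensemble sizes and number of retained modes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n_time, n_grid = 50, 200
# Toy correlated "field" snapshots standing in for ocean-state ensemble members.
snapshots = rng.standard_normal((n_time, n_grid)).cumsum(axis=1)

anomalies = snapshots - snapshots.mean(axis=0)   # remove the ensemble mean
# Rows of vt are the spatial EOF modes; s**2 measures each mode's variance.
u, s, vt = np.linalg.svd(anomalies, full_matrices=False)

n_modes = 5
modes = vt[:n_modes]                          # leading spatial EOFs
scales = s[:n_modes] / np.sqrt(n_time - 1)    # per-mode standard deviations

# Perturbed realization: mean field plus modes with uniform random amplitudes
# (rescaled here to unit variance), as in the uniform-amplitude representation.
xi = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), n_modes)
perturbed = snapshots.mean(axis=0) + (xi * scales) @ modes
print(perturbed.shape)
```

Each draw of ξ yields one input realization; propagating an ensemble of such draws through the forward model provides the training data for the PC surrogates.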

It is noted that the HYCOM simulations in the aforementioned OGCM study focus only on the ocean circulation, and ignore the oceanic feedback (e.g., momentum, energy, humidity) to the atmosphere. A more elaborate analysis is consequently performed to understand the atmospheric dynamics in a fully coupled atmosphere-wave-ocean system. In particular, we explore the stochastic evolution of Hurricane Earl (2010) in response to uncertainties stemming from random perturbations in the storm's initial size, strength, and rotational stretch. To this end, the UWIN-CM framework is employed as the forecasting system, used to propagate input uncertainties and generate a realization ensemble. PC surrogate models for the time evolution of both the maximum wind speed and the minimum sea level pressure (SLP) are constructed. These PC surrogates provide statistical insights into the probability distributions of model responses throughout the simulation time span. Statistical analysis of the rapid intensification (RI) process suggests that storms with enhanced initial intensity and counter-clockwise rotation perturbations are more likely to undergo an RI process. In addition, the RI process appears mostly sensitive to the mean wind strength and rotational stretch, rather than to the storm size and asymmetric wind amplitude perturbations. This is consistent with the global sensitivity analysis of the PC surrogate models. Finally, we combine parametric storm perturbations with global stochastic kinetic energy backscatter (SKEBS) forcing in UWIN-CM simulations, and conclude that whereas the storm track is substantially influenced by the global perturbations, it is only weakly influenced by the properties of the initial storm.

The PC framework not only provides easy access to traditional statistical insights and global sensitivity indices, but also reduces the computational burden of sampling the system response, as performed for instance in Bayesian inference. These two advantages of PC approaches are well demonstrated in the study of the earthquake seismology model response to random fault plane configurations. The PC statistical analysis suggests that the hypocenter location plays a dominant role in earthquake ground motion responses (in terms of peak ground velocities, PGVs), while the elliptical patch properties have only a secondary influence. In addition, our PC-based Bayesian analysis successfully identified the most likely fault plane configuration with respect to the chosen ground motion prediction equation (GMPE) curve: the hypocenter is more likely to be in the bottom right quadrant of the fault plane, and the elliptical patch is centered in the bottom left quadrant. To incorporate additional physical restrictions on fault plane configurations, a novel restricted sampling methodology is introduced. The results indicate that the restricted inference is more physically sensible, while retaining plausible consistency with the findings from unrestricted inference.
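The computational advantage of a PC surrogate in Bayesian inference, combined with a restriction on the parameter support, can be sketched with a one-parameter toy problem: a cheap polynomial stands in for the expensive forward model inside a Metropolis sampler, and proposals violating a (hypothetical) physical constraint θ ∈ [0, 1] are rejected outright. The surrogate, observation, and noise level below are all illustrative, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical stand-in for a PC surrogate of the forward model.
surrogate = lambda theta: 1.0 + 2.0 * theta - 0.5 * theta**2
data, sigma = 2.2, 0.1                      # synthetic observation and noise std

def log_post(theta):
    if not (0.0 <= theta <= 1.0):           # restricted support (physical constraint)
        return -np.inf
    return -0.5 * ((surrogate(theta) - data) / sigma) ** 2

# Random-walk Metropolis; each step costs one surrogate call, not a PDE solve.
theta, samples = 0.5, []
for _ in range(20000):
    prop = theta + 0.1 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)

post = np.array(samples[5000:])             # discard burn-in
print(post.mean())
```

With the forward model replaced by the surrogate, the tens of thousands of posterior evaluations above are essentially free, which is what makes the Bayesian analysis tractable.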