# Browsing by Subject "uncertainty quantification"


Item (Open Access) **Adaptive Sparse Grid Approaches to Polynomial Chaos Expansions for Uncertainty Quantification** (2015), Winokur, Justin Gregory

Polynomial chaos expansions provide an efficient and robust framework to analyze and quantify uncertainty in computational models. This dissertation explores the use of adaptive sparse grids to reduce the computational cost of determining a polynomial surrogate model while examining and implementing new adaptive techniques.

Determination of the chaos coefficients using traditional tensor-product quadrature suffers from the so-called curse of dimensionality, where the number of model evaluations scales exponentially with dimension. Previous work used sparse Smolyak quadrature to temper this dimensional scaling, and applied it successfully to an expensive Ocean General Circulation Model, HYCOM, during the September 2004 passage of Hurricane Ivan through the Gulf of Mexico. Results from this investigation suggested that adaptivity could yield great gains in efficiency. However, efforts at adaptivity are hampered by quadrature accuracy requirements.
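The scaling gap between full tensor-product quadrature and Smolyak sparse quadrature can be illustrated by simply counting grid points. The sketch below (an illustration only, not the study's actual code) counts the distinct points of an isotropic Smolyak grid built from nested Clenshaw-Curtis rules, using the standard identity that each admissible multi-index contributes the product of the 1-D incremental point counts, and compares it to the full tensor product:

```python
from itertools import product

def cc_size(level):
    # Number of nested Clenshaw-Curtis points at a given 1-D level.
    return 1 if level == 0 else 2**level + 1

def tensor_points(level, dim):
    # Full tensor-product grid: point count grows exponentially with dim.
    return cc_size(level)**dim

def smolyak_points(level, dim):
    # Distinct points of an isotropic Smolyak grid from nested 1-D rules:
    # sum over multi-indices with |i| <= level of the product of the
    # incremental point counts delta(l) = m(l) - m(l-1).
    delta = [cc_size(l) - (cc_size(l - 1) if l > 0 else 0)
             for l in range(level + 1)]
    total = 0
    for idx in product(range(level + 1), repeat=dim):
        if sum(idx) <= level:
            t = 1
            for l in idx:
                t *= delta[l]
            total += t
    return total

print(tensor_points(3, 4))   # 6561 model evaluations
print(smolyak_points(3, 4))  # 137 model evaluations
```

Even at this modest level and dimension, the sparse grid needs roughly 2% of the tensor-product evaluations; the gap widens rapidly with dimension.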

We explore the implementation of a novel adaptive strategy to design sparse ensembles of oceanic simulations suitable for constructing polynomial chaos surrogates. We use a recently developed adaptive pseudo-spectral projection (aPSP) algorithm that is based on a direct application of Smolyak's sparse grid formula and that allows for the use of arbitrary admissible sparse grids. Such a construction ameliorates the severe restrictions posed by insufficient quadrature accuracy. The adaptive algorithm is tested using an existing simulation database of the HYCOM model during Hurricane Ivan. The *a priori* tests demonstrate that sparse and adaptive pseudo-spectral constructions lead to substantial savings over isotropic sparse sampling.

In order to provide a finer degree of resolution control along two distinct subsets of model parameters, we investigate two methods to build polynomial approximations. Both approaches are based on pseudo-spectral projection (PSP) methods on adaptively constructed sparse grids. The control of the error along different subsets of parameters may be needed in the case of a model depending on uncertain parameters and deterministic design variables. We first consider a nested approach where an independent adaptive sparse grid pseudo-spectral projection is performed along the first set of directions only, and at each point a sparse grid is constructed adaptively in the second set of directions. We then consider the application of aPSP in the space of all parameters, and introduce directional refinement criteria to provide a tighter control of the projection error along individual dimensions. Specifically, we use a Sobol decomposition of the projection surpluses to tune the sparse grid adaptation. The behavior and performance of the two approaches are compared for a simple two-dimensional test problem and for a shock-tube ignition model involving 22 uncertain parameters and 3 design parameters. The numerical experiments indicate that whereas both methods provide effective means for tuning the quality of the representation along distinct subsets of parameters, adaptive PSP in the global parameter space generally requires fewer model evaluations than the nested approach to achieve similar projection error.

In order to increase efficiency even further, a subsampling technique is developed to allow for local adaptivity within the aPSP algorithm. The local refinement is achieved by exploiting the hierarchical nature of nested quadrature grids to determine regions of estimated convergence. In order to achieve global representations with local refinement, synthesized model data from a lower-order projection are used for the final projection. The final subsampled grid was also tested with two more robust sparse projection techniques: compressed sensing and hybrid least-angle regression. These methods are evaluated on two sample test functions and then as an *a priori* analysis of the HYCOM simulations and the shock-tube ignition model investigated earlier. Small but non-trivial efficiency gains were found in some cases; in others, a large reduction in model evaluations was realized with only a small loss of model fidelity. Further extensions and capabilities are recommended for future investigations.

Item (Open Access) **Influence of Material Properties and Fracture Properties on the Crack Nucleation and Growth** (2021), Zeng, Bo

In this thesis, we studied the influence of spatial variations in the fracture property and the elastic property on the resulting crack patterns during soil desiccation. Young's modulus was selected as the representative elastic property and fracture toughness as the representative fracture property. Their well-defined, spatially fluctuating random fields are the input of the phase-field fracture simulation, and the resulting damage field is the output. Various postprocessing steps were carried out to analyze the resulting damage fields. After comparing the crack morphologies and fragment size distributions, a preliminary conclusion was that the two inputs have very similar influence on the output. The Pearson correlation coefficient, used as a first attempt at sensitivity analysis, also gave indistinguishable correlation values for the two. A more rigorous approach with a well-isolated sensitivity measure was needed, which brought us to Sobol' indices based on polynomial chaos expansion: a global sensitivity analysis measure that apportions the variance of the output to the variance of each input and every combination of inputs.
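The connection between polynomial chaos expansions and Sobol' indices is direct: in an orthonormal basis, the squared PCE coefficients partition the output variance among the inputs and their interactions, so the indices can be read off the coefficients with no extra sampling. A minimal sketch with a hypothetical two-input expansion (illustrative only, not the thesis's code):

```python
def sobol_from_pce(coeffs):
    """First-order and total Sobol' indices from the coefficients of a
    polynomial chaos expansion in an orthonormal basis.

    coeffs: dict mapping multi-index tuples (one entry per input) to
    PCE coefficients; the all-zero multi-index carries the mean.
    """
    dim = len(next(iter(coeffs)))
    # Total variance: sum of squared coefficients, excluding the mean term.
    var = sum(c**2 for a, c in coeffs.items() if any(a))
    first, total = [0.0] * dim, [0.0] * dim
    for a, c in coeffs.items():
        if not any(a):
            continue
        active = [k for k in range(dim) if a[k] > 0]
        for k in active:
            total[k] += c**2          # every term involving input k
        if len(active) == 1:
            first[active[0]] += c**2  # terms involving input k alone
    return [s / var for s in first], [t / var for t in total]

# Hypothetical expansion: f = 1 + 3*psi1(x1) + 2*psi1(x2) + 1*psi1(x1)psi1(x2)
coeffs = {(0, 0): 1.0, (1, 0): 3.0, (0, 1): 2.0, (1, 1): 1.0}
S1, ST = sobol_from_pce(coeffs)
print(S1)  # [9/14, 4/14]
print(ST)  # [10/14, 5/14]
```

The gap between total and first-order indices (1/14 for each input here) measures the interaction contribution, which a Pearson correlation coefficient cannot isolate.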

Item (Open Access) **Multimodal Probabilistic Inference for Robust Uncertainty Quantification** (2021), Jerfel, Ghassen

Deep learning models, which form the backbone of modern ML systems, generalize poorly to small changes in the data distribution. They are also bad at signalling failure, making predictions with high confidence when their training data or fragile assumptions make them unlikely to make reasonable decisions. This lack of robustness makes it difficult to trust their use in safety-critical settings. Accordingly, there is a pressing need to equip models with a notion of uncertainty to understand their failure modes and detect when their decisions cannot be used or require intervention. Uncertainty quantification is thus crucial for ML systems to work consistently on real-world data and fail loudly when they don't.

One growing line of research on uncertainty quantification is probabilistic modelling, which is concerned with capturing model uncertainty by placing a distribution over models that can be marginalized at test time. This is especially useful in underspecified models, which can have diverse near-optimal solutions at training time with similar population-level performance. However, probabilistic modelling approaches such as Bayesian neural networks (BNNs) do not scale well in terms of memory and runtime and often underperform simple deterministic baselines in terms of accuracy. Furthermore, BNNs underperform deep ensembles because they fail to explore multiple modes in the loss landscape, while being effective at capturing uncertainty within a single mode.

In this thesis, we develop multimodal representations of model uncertainty that can capture a diverse set of hypotheses. We first propose a scalable family of BNN priors (and corresponding approximate posteriors) that combine the local (i.e. within-mode) uncertainty with mode averaging to deliver robust and calibrated uncertainty estimates in addition to improving accuracy both in and out of distribution. We then leverage a multimodal representation of uncertainty to modulate the amount of information transfer between tasks in meta-learning. Our proposed framework integrates Bayesian non-parametric mixtures with deep learning to enable NNs to adapt their capacity as more data is observed which is crucial for lifelong learning. Finally, we propose to replace the reverse Kullback-Leibler divergence (RKL), known for its mode-seeking behavior and for underestimating posterior covariance, with the forward KL (FKL) divergence in a theoretically-guided novel inference procedure. This ensures the efficient combination of variational boosting with adaptive importance sampling. The proposed algorithm offers a well-defined compute-accuracy trade-off and is guaranteed to converge to the optimal multimodal variational solution as well as the optimal importance sampling proposal distribution.
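The mode-seeking behavior of the reverse KL divergence mentioned above can be seen numerically on a toy bimodal target. In the sketch below (a generic illustration, not the thesis's inference procedure), a Gaussian sitting on one mode of a two-component mixture wins under reverse KL, while a broad, mass-covering Gaussian wins under forward KL:

```python
import numpy as np

# Bimodal target: equal-weight mixture of N(-3, 1) and N(3, 1).
x = np.linspace(-12, 12, 4001)
dx = x[1] - x[0]

def normal(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

p = 0.5 * normal(x, -3, 1) + 0.5 * normal(x, 3, 1)

def kl(a, b):
    # KL(a || b) by numerical integration; epsilon guards log(0).
    eps = 1e-300
    return np.sum(a * np.log((a + eps) / (b + eps))) * dx

q_mode = normal(x, 3, 1)             # Gaussian sitting on one mode
q_wide = normal(x, 0, np.sqrt(10))   # Gaussian covering both modes

# Reverse KL(q || p) favors the single-mode fit; forward KL(p || q)
# heavily penalizes it for assigning no mass to the second mode.
rkl_mode, rkl_wide = kl(q_mode, p), kl(q_wide, p)
fkl_mode, fkl_wide = kl(p, q_mode), kl(p, q_wide)
print(rkl_mode < rkl_wide)  # reverse KL prefers the mode-seeking fit
print(fkl_wide < fkl_mode)  # forward KL prefers the mass-covering fit
```

This asymmetry is what motivates swapping RKL for FKL when the goal is a multimodal, mass-covering approximation.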

Item (Open Access) **Stochastic Modeling of Parametric and Model-Form Uncertainties in Computational Mechanics: Applications to Multiscale and Multimodel Predictive Frameworks** (2023), Zhang, Hao

Uncertainty quantification (UQ) plays a critical role in computational science and engineering. The representation of uncertainties stands at the core of UQ frameworks and encompasses the modeling of parametric uncertainties (uncertainties affecting the parameters of a well-known model) and model-form uncertainties (uncertainties defined at the operator level). Past contributions in the field have primarily focused on parametric uncertainties in idealized environments involving simple state spaces and index sets. The consideration of model-form uncertainties (beyond model error correction), on the other hand, is still in its infancy. In this context, this dissertation aims to develop stochastic modeling approaches to represent these two forms of uncertainties in multiscale and multimodel settings.

The case of spatially-varying geometrical perturbations on nonregular index sets is first addressed. We propose an information-theoretic formulation where a push-forward map is used to induce bounded variations and the latent Gaussian random field is implicitly defined through a stochastic partial differential equation on the manifold defining the surface of interest. Applications to a gyroid structure and patient-specific brain interfaces are presented. We then address operator learning in a multiscale setting where we propose a data-free training method, applied to Fourier neural operators. We investigate the homogenization of random media defined at microscopic and mesoscopic scales. Next, we develop a Riemannian probabilistic framework to capture operator-form uncertainties in the multimodel setting (i.e., when a family of model candidates is available). The proposed methodology combines a proper-orthogonal-decomposition reduced-order model with Riemannian operators ensuring admissibility in the almost sure sense. The framework exhibits several key advantages, including the ability to generate a model ensemble within the convex hull defined by model proposals and to constrain the mean in the Fréchet sense, as well as ease of implementation. The method is deployed to investigate model-form uncertainties in various molecular dynamics simulations on graphene sheets. We finally propose an extension of this framework to systems described by coupled partial differential equations, with emphasis on the phase-field approach to brittle fracture.

Item (Open Access) **Uncertainty in the Bifurcation Diagram of a Model of Heart Rhythm Dynamics** (2014), Ring, Caroline

To understand the underlying mechanisms of cardiac arrhythmias, computational models are used to study heart rhythm dynamics. The parameters of these models carry inherent uncertainty. Therefore, to interpret the results of these models, uncertainty quantification (UQ) and sensitivity analysis (SA) are important. Polynomial chaos (PC) is a computationally efficient method for UQ and SA in which a model output Y, dependent on independent uncertain parameters represented by a random vector ξ, is approximated as a spectral expansion in multidimensional orthogonal polynomials in ξ. The expansion can then be used to characterize the uncertainty in Y.
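The spectral expansion just described is easy to sketch in one dimension: project a model output onto normalized Legendre polynomials by Gauss-Legendre quadrature, then read the mean and variance off the coefficients. This is a generic textbook illustration (uniform ξ and an exponential toy model, not the thesis's restitution model):

```python
import numpy as np

# Project y(xi) = exp(xi), xi ~ Uniform(-1, 1), onto normalized
# Legendre polynomials via Gauss-Legendre quadrature.
order = 6
nodes, weights = np.polynomial.legendre.leggauss(order + 1)
weights = weights / 2.0  # account for the uniform density on [-1, 1]

def legendre_norm(k, x):
    # Legendre polynomial of degree k, normalized to unit variance
    # under the U(-1, 1) measure.
    c = np.zeros(k + 1)
    c[k] = 1.0
    return np.polynomial.legendre.legval(x, c) * np.sqrt(2 * k + 1)

y = np.exp(nodes)
coeffs = np.array([np.sum(weights * y * legendre_norm(k, nodes))
                   for k in range(order + 1)])

mean = coeffs[0]               # E[Y] is the zeroth coefficient
var = np.sum(coeffs[1:] ** 2)  # Var[Y] is the sum of squared higher coeffs
# Exact values for comparison: E[e^xi] = sinh(1),
# Var[e^xi] = sinh(2)/2 - sinh(1)^2.
```

With only 7 model evaluations the low-order expansion reproduces the exact mean and variance to well below 1e-6, which is the efficiency argument for PC over brute-force sampling.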

PC methods were applied to UQ and SA of the dynamics of a two-dimensional return-map model of cardiac action potential duration (APD) restitution in a paced single cell. Uncertainty was considered in four parameters of the model: three time constants and the pacing stimulus strength. The basic cycle length (BCL) (the period between stimuli) was treated as the control parameter. Model dynamics was characterized with bifurcation analysis, which determines the APD and stability of fixed points of the model at a range of BCLs, and the BCLs at which bifurcations occur. These quantities can be plotted in a bifurcation diagram, which summarizes the dynamics of the model. PC UQ and SA were performed for these quantities. UQ results were summarized in a novel probabilistic bifurcation diagram that visualizes the APD and stability of fixed points as uncertain quantities.

Classical PC methods assume that model outputs exist and are reasonably smooth over the full domain of ξ. Because models of heart rhythm often exhibit bifurcations and discontinuities, their outputs may not obey the existence and smoothness assumptions on the full domain, but only on some subdomains, which may be irregularly shaped. On these subdomains, the random variables representing the parameters may no longer be independent. PC methods must therefore be modified for analysis of these discontinuous quantities. The Rosenblatt transformation maps the variables on the subdomain onto a rectangular domain; the transformed variables are independent and uniformly distributed. A new numerical estimation of the Rosenblatt transformation was developed that improves accuracy and computational efficiency compared to existing kernel density estimation methods. PC representations of the outputs in the transformed variables were then constructed. Coefficients of the PC expansions were estimated using Bayesian inference methods. For discontinuous model outputs, SA was performed using a sampling-based variance-reduction method, with the PC estimate used as an efficient proxy for the full model.
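The Rosenblatt transformation itself can be illustrated on a simple irregular subdomain where the CDFs are known in closed form (unlike the dissertation's setting, which estimates them numerically from samples). For a pair distributed uniformly on the triangle 0 < x2 < x1 < 1, composing the marginal CDF of x1 with the conditional CDF of x2 given x1 yields independent uniform variables on the unit square:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dependent pair, uniform on the triangle 0 < x2 < x1 < 1.
n = 100_000
x1 = np.sqrt(rng.random(n))  # marginal density f1(x1) = 2*x1 on (0, 1)
x2 = x1 * rng.random(n)      # conditional distribution U(0, x1)

# Rosenblatt transformation: u1 = F1(x1), u2 = F(x2 | x1).
u1 = x1**2                   # F1(x1) = x1^2
u2 = x2 / x1                 # F(x2 | x1) = x2 / x1

# The transformed variables are (up to sampling error) independent
# U(0, 1): means near 1/2, correlation near 0.
print(u1.mean(), u2.mean(), np.corrcoef(u1, u2)[0, 1])
```

Once the variables live on a rectangular domain, standard tensorized PC bases apply again, which is the point of the construction.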

To evaluate the accuracy of the PC methods, PC UQ and SA results were compared to large-sample Monte Carlo UQ and SA results. PC UQ and SA of the fixed-point APDs, and of the probability that a stable fixed point existed at each BCL, were very close to the MC results for those quantities. However, PC UQ and SA of the bifurcation BCLs were less accurate compared to the MC results.

The computational time required for the PC and Monte Carlo methods was also compared. PC analysis (including the Rosenblatt transformation and Bayesian inference) required less than 10 total hours of computational time, of which approximately 30 minutes was devoted to model evaluations, compared to approximately 65 hours required for Monte Carlo sampling of the model outputs at 1 × 10⁶ ξ points.

PC methods provide a useful framework for efficient UQ and SA of the bifurcation diagram of a model of cardiac APD dynamics. Model outputs with bifurcations and discontinuities can be analyzed using modified PC methods. The methods applied and developed in this study may be extended to other models of heart rhythm dynamics. These methods have potential for use for uncertainty and sensitivity analysis in many applications of these models, including simulation studies of heart rate variability, cardiac pathologies, and interventions.