# Browsing by Subject "math.PR"

Now showing 1 - 20 of 40


###### A Large Deviation Approach to Posterior Consistency in Dynamical Systems

Su, Langxuan; Mukherjee, Sayan (Open Access)

In this paper, we provide asymptotic results concerning (generalized) Bayesian inference for certain dynamical systems based on a large deviation approach. Given a sequence of observations $y$, a class of model processes parameterized by $\theta \in \Theta$ which can be characterized as a stochastic process $X^\theta$ or a measure $\mu_\theta$, and a loss function $L$ which measures the error between $y$ and a realization of $X^\theta$, we specify the generalized posterior distribution $\pi_t(\theta \mid y)$. The goal of this paper is to study the asymptotic behavior of $\pi_t(\theta \mid y)$ as $t \to \infty$. In particular, we state conditions on the model family $\{\mu_\theta\}_{\theta \in \Theta}$ and the loss function $L$ such that the posterior distribution converges. The two conditions we require are: (1) a conditional large deviation behavior for a single $X^\theta$, and (2) an exponential continuity condition over the model family for the map from the parameter $\theta$ to the loss incurred between $X^\theta$ and the observation sequence $y$. The proposed framework is quite general; we apply it to two very different classes of dynamical systems: continuous-time hypermixing processes and Gibbs processes on shifts of finite type. We also show that the generalized posterior distribution concentrates asymptotically on those parameters that minimize the expected loss and a divergence term, hence proving posterior consistency.

###### A Merge-Split Proposal for Reversible Monte Carlo Markov Chain Sampling of Redistricting Plans

Carter, Daniel; Hunter, Zach; Herschlag, Gregory; Mattingly, Jonathan (Open Access)

We describe a Markov chain on redistricting plans that makes relatively global moves. The chain is designed to be usable as the proposal in a Markov Chain Monte Carlo (MCMC) algorithm.
Sampling the space of plans amounts to dividing a graph into a partition with a specified number of elements, each of which corresponds to a different district. The partitions satisfy a collection of hard constraints, and the measure may be weighted with regard to a number of other criteria. When these constraints and criteria are chosen to align well with classical legal redistricting criteria, the algorithm can be used to generate a collection of non-partisan, neutral plans. This collection of plans can serve as a baseline against which a particular plan of interest is compared. If a given plan has different racial or partisan qualities than what is typical of the collection of plans, the given plan may have been gerrymandered and is labeled as an outlier.

###### A stochastic version of Stein Variational Gradient Descent for efficient sampling

Li, L; Li, Y; Liu, JG; Liu, Z; Lu, J (Open Access)

We propose in this work RBM-SVGD, a stochastic version of the Stein Variational Gradient Descent (SVGD) method for efficiently sampling from a given probability measure, and thus useful for Bayesian inference. The method applies the Random Batch Method (RBM) for interacting particle systems, proposed by Jin et al., to the interacting particle systems in SVGD. While keeping the behavior of SVGD, it reduces the computational cost, especially when the interacting kernel has long range.
Numerical examples verify the efficiency of this new version of SVGD.

###### A Variation on the Donsker-Varadhan Inequality for the Principal Eigenvalue

Lu, Jianfeng; Steinerberger, Stefan (Open Access, 2017-04-23)

The purpose of this short note is to give a variation on the classical Donsker-Varadhan inequality, which bounds the first eigenvalue of a second-order elliptic operator on a bounded domain $\Omega$ by the largest mean first exit time of the associated drift-diffusion process via $$\lambda_1 \geq \frac{1}{\sup_{x \in \Omega} \mathbb{E}_x \tau_{\Omega^c}}.$$ Instead of looking at the mean of the first exit time, we study quantiles: let $d_{p, \partial \Omega}:\Omega \rightarrow \mathbb{R}_{\geq 0}$ be the smallest time $t$ such that the likelihood of exiting within that time is $p$; then $$\lambda_1 \geq \frac{\log{(1/p)}}{\sup_{x \in \Omega} d_{p,\partial \Omega}(x)}.$$ Moreover, as $p \rightarrow 0$, this lower bound converges to $\lambda_1$.

###### Asymptotic behavior of branching diffusion processes in periodic media

Hebbar, P; Koralov, L; Nolen, J (Open Access)

We study the asymptotic behavior of branching diffusion processes in periodic media. For a super-critical branching process, we distinguish two types of behavior for the normalized number of particles in a bounded domain, depending on the distance of the domain from the region where the bulk of the particles is located. At distances that grow linearly in time, we observe intermittency (i.e., the $k$-th moment dominates the $k$-th power of the first moment for some $k$), while, at distances that grow sub-linearly in time, we show that all the moments converge.
A key ingredient in our analysis is a sharp estimate of the transition kernel for the branching process, valid up to linear-in-time distances from the location of the initial particle.

###### Asymptotic behavior of the Brownian frog model

Beckman, E; Dinan, E; Durrett, R; Huo, R; Junge, M (Open Access)

We introduce an extension of the frog model to Euclidean space and prove properties for the spread of active particles. The new geometry introduces a phase transition that does not occur for the frog model on the lattice. Fix $r>0$ and place a particle at each point $x$ of a unit intensity Poisson point process $\mathcal P \subseteq \mathbb R^d - \mathbb B(0,r)$. Around each point in $\mathcal{P}$, put a ball of radius $r$. A particle at the origin performs Brownian motion. When it hits the ball around $x$ for some $x \in \mathcal P$, new particles begin independent Brownian motions from the centers of the balls in the cluster containing $x$. Subsequent visits to the cluster do nothing. This waking process continues indefinitely. For $r$ smaller than the critical threshold of continuum percolation, we show that the set of activated sites in $\mathcal P$ behaves like a linearly expanding ball. Moreover, in any fixed ball the set of active particles converges to a unit intensity Poisson point process. Lastly, we prove that the model expands at rate at least $t^{2- \epsilon}$ when $d=2$ and $r$ equals the critical threshold in continuum percolation.

###### Complexity of randomized algorithms for underdamped Langevin dynamics

Cao, Y; Lu, J; Wang, L (Open Access)

We establish an information complexity lower bound for randomized algorithms simulating underdamped Langevin dynamics.
More specifically, we prove that the worst-case $L^2$ strong error is of order $\Omega(\sqrt{d}\, N^{-3/2})$ for solving a family of $d$-dimensional underdamped Langevin dynamics, for any randomized algorithm with only $N$ queries to $\nabla U$, the driving Brownian motion, and its weighted integration, respectively. The lower bound we establish matches the upper bound for the randomized midpoint method recently proposed by Shen and Lee [NIPS 2019], in terms of both parameters $N$ and $d$.

###### Convergence of Stratified MCMC Sampling of Non-Reversible Dynamics

Earle, G; Mattingly, JC (Open Access)

We present a form of stratified MCMC algorithm built with non-reversible stochastic dynamics in mind. It can also be viewed as a generalization of the exact milestoning method, or a form of NEUS. We prove convergence of the method under certain assumptions, with expressions for the convergence rate in terms of the process's behavior within each stratum and its large-scale behavior between strata. We show that the algorithm has a unique fixed point which corresponds to the invariant measure of the process without stratification. We show how the speeds of two versions of the new algorithm, one with an extra eigenvalue problem step and one without, relate to the mixing rate of a discrete process on the strata, and the mixing probability of the process being sampled within each stratum. The eigenvalue problem version also relates to local and global perturbation results for discrete Markov chains, such as those given by Van Koten, Weare, et al.

###### Coupling and Decoupling to bound an approximating Markov Chain

Johndrow, JE; Mattingly, JC (Open Access, 2017-07-27)

This simple note lays out a few observations which are well known in many ways but may not have been said in quite this way before. The basic idea is that when comparing two different Markov chains it is useful to couple them in such a way that they agree as often as possible.
We construct such a coupling and analyze it via a simple dominating chain which registers whether the two processes agree or disagree. We find this imagery useful when thinking about such problems. We are particularly interested in comparing the invariant measures and long-time averages of the processes. However, since the paths agree for long runs, the coupling also provides estimates on various stopping times, such as hitting or exit times. We also show that certain bounds are tight. Finally, we provide a simple application to a Markov Chain Monte Carlo algorithm and demonstrate numerically that using an approximating chain rather than the original sampling chain gives a good level of approximation at considerable speed-up.

###### Ergodicity and Lyapunov functions for Langevin dynamics with singular potentials

Herzog, DP; Mattingly, JC (Open Access, 2017-11-30)

We study Langevin dynamics of $N$ particles on $\mathbb{R}^d$ interacting through a singular repulsive potential, e.g., the well-known Lennard-Jones type, and show that the system converges to the unique invariant Gibbs measure exponentially fast in a weighted total variation distance. The proof of the main result relies on an explicit construction of a Lyapunov function. In contrast to previous results for such systems, our result implies geometric convergence to equilibrium starting from an essentially optimal family of initial distributions.

###### Error bounds for Approximations of Markov chains

Johndrow, James E; Mattingly, Jonathan C (Open Access, 2017-11-30)

The first part of this article gives error bounds for approximations of Markov kernels under Foster-Lyapunov conditions. The basic idea is that when both the approximating kernel and the original kernel satisfy a Foster-Lyapunov condition, the long-time dynamics of the two chains -- as well as the invariant measures, when they exist -- will be close in a weighted total variation norm, provided that the approximation is sufficiently accurate.
The required accuracy depends in part on the Lyapunov function, with more stable chains being more tolerant of approximation error. We are motivated by the recent growth in proposals for scaling Markov chain Monte Carlo algorithms to large datasets by defining an approximating kernel that is faster to sample from. Many of these proposals use only a small subset of the data points to construct the transition kernel, and we consider an application to this class of approximating kernels. We also consider applications to distribution approximations in Gibbs sampling. Another application in which approximating kernels are commonly used is in Metropolis algorithms for Gaussian process models common in spatial statistics and nonparametric regression. In this setting, there are typically two sources of approximation error: discretization error and approximation of Metropolis acceptance ratios. Because the approximating kernel is obtained by discretizing the state space, it is singular with respect to the exact kernel. To analyze this application, we give additional results in Wasserstein metrics, in contrast to the preceding examples, which quantified the level of approximation in a total variation norm.

###### Fractional stochastic differential equations satisfying fluctuation-dissipation theorem

Li, L; Liu, J-G; Lu, Jianfeng (Metadata only, 2017-04-23)

We consider in this work a stochastic differential equation (SDE) model for particles in contact with a heat bath when the memory effects are non-negligible. As a result of the fluctuation-dissipation theorem, differential equations driven by fractional Brownian noise to model memory effects should be paired with Caputo derivatives; on this basis we consider fractional stochastic differential equations (FSDEs), which should be understood in an integral form. We establish the existence of strong solutions for such equations.
In the linear forcing regime, we compute the solutions explicitly and analyze the asymptotic behavior, through which we verify that satisfying the fluctuation-dissipation theorem indeed leads to the correct physical behavior. We further discuss possible extensions to the nonlinear forcing regime, while leaving the rigorous analysis for future work.

###### Geometric ergodicity of Langevin dynamics with Coulomb interactions

Lu, Y; Mattingly, JC (Open Access)

This paper is concerned with the long-time behavior of Langevin dynamics of *Coulomb gases* in $\mathbf{R}^d$ with $d\geq 2$, that is, a second-order system of Brownian particles driven by an external force and a pairwise repulsive Coulomb force. We prove that the system converges exponentially to the unique Boltzmann-Gibbs invariant measure under a weighted total variation distance. The proof relies on a novel construction of a Lyapunov function for the Coulomb system.

###### Gibbsian dynamics and the generalized Langevin equation

Herzog, DP; Mattingly, JC; Nguyen, HD (Open Access)

We study the statistically invariant structures of the nonlinear generalized Langevin equation (GLE) with a power-law memory kernel. For a broad class of memory kernels, including those in the subdiffusive regime, we construct solutions of the GLE using a Gibbsian framework, which does not rely on existing Markovian approximations. Moreover, we provide conditions on the decay of the memory to ensure uniqueness of statistically steady states, generalizing previously known results for the GLE with particular kernels given by a sum of exponentials.

###### Higher order asymptotics for large deviations -- Part I

Fernando, K; Hebbar, P (Open Access)

For sequences of non-lattice weakly dependent random variables, we obtain asymptotic expansions for Large Deviation Principles. These expansions, commonly referred to as strong large deviation results, are in the spirit of Edgeworth expansions for the Central Limit Theorem.
We apply our results to show that Diophantine iid sequences, finite state Markov chains, strongly ergodic Markov chains, and Birkhoff sums of smooth expanding maps and subshifts of finite type satisfy these strong large deviation results.

###### Higher order asymptotics for large deviations -- Part II

Fernando, K; Hebbar, P (Open Access)

We obtain asymptotic expansions for the large deviation principle (LDP) for continuous-time stochastic processes with weakly dependent increments. As a key example, we show that additive functionals of solutions of stochastic differential equations (SDEs) satisfying Hörmander's condition on a $d$-dimensional compact manifold admit these asymptotic expansions to all orders.

###### Learning interacting particle systems: diffusion parameter estimation for aggregation equations

Huang, H; Liu, JG; Lu, J (Open Access, 2018-02-14)

In this article, we study parameter estimation for interacting particle systems subject to Newtonian aggregation. Specifically, we construct an estimator $\widehat{\nu}$ from partially observed data to approximate the diffusion parameter $\nu$, and we establish a bound on the estimation error. Furthermore, we extend this result to general aggregation equations with a bounded Lipschitz interaction field.

###### Limiting Behaviors of High Dimensional Stochastic Spin Ensemble

Gao, Y; Kirkpatrick, K; Marzuola, J; Mattingly, J; Newhall, KA (Open Access)

Lattice spin models in statistical physics are used to understand magnetism. Their Hamiltonians are a discrete version of a Dirichlet energy, signifying a relationship to the Harmonic map heat flow equation. The Gibbs distribution, defined with this Hamiltonian, is used in the Metropolis-Hastings (M-H) algorithm to generate dynamics tending towards an equilibrium state. In the limiting situation when the inverse temperature is large, we establish the relationship between the discrete M-H dynamics and the continuous Harmonic map heat flow associated with the Hamiltonian.
We show the convergence of the M-H dynamics to the Harmonic map heat flow equation in two steps. First, with fixed lattice size and a proper choice of proposal size in one M-H step, the M-H dynamics acts as gradient descent and is shown to converge to a system of Langevin stochastic differential equations (SDEs). Second, with proper scaling of the inverse temperature in the Gibbs distribution and taking the lattice size to infinity, this SDE system is shown to converge to the deterministic Harmonic map heat flow equation. Our results are not unexpected, but they show remarkable connections between the M-H steps and the SDE Stratonovich formulation, and reveal that trajectory-wise out-of-equilibrium dynamics are related to a canonical PDE system with geometric constraints.

###### Moderate Deviation for Random Elliptic PDEs with Small Noise

Li, X; Liu, J; Lu, J; Zhou, X (Open Access, 2017-04-23)

Partial differential equations with random inputs have become popular models to characterize physical systems with uncertainty coming from, e.g., imprecise measurement and intrinsic randomness. In this paper, we perform asymptotic rare event analysis for such elliptic PDEs with random inputs. In particular, we consider the asymptotic regime in which the noise level converges to zero, suggesting that the system uncertainty is low but does exist. We develop sharp approximations of the probability of a large class of rare events.

###### Multi-Scale Merge-Split Markov Chain Monte Carlo for Redistricting

Autry, Eric A; Carter, Daniel; Herschlag, Gregory; Hunter, Zach; Mattingly, Jonathan C (Open Access)

We develop a Multi-Scale Merge-Split Markov chain on redistricting plans. The chain is designed to be usable as the proposal in a Markov Chain Monte Carlo (MCMC) algorithm. Sampling the space of plans amounts to dividing a graph into a partition with a specified number of elements, each of which corresponds to a different district.
The districts satisfy a collection of hard constraints, and the measure may be weighted with regard to a number of other criteria. The multi-scale algorithm is similar to our previously developed Merge-Split proposal; however, it provides improved scaling properties and may also be used to preserve nested communities of interest such as counties and precincts. Both works use a proposal which extends the ReCom algorithm, which leveraged spanning trees to merge and split districts. In this work we extend the state space so that each district is defined by a hierarchy of trees. In this sense, the proposal step in both algorithms can be seen as a "Forest ReCom." We also expand the state space to include edges that link specified districts, which further improves the computational efficiency of our algorithm. The collection of plans sampled by the MCMC algorithm can serve as a baseline against which a particular plan of interest is compared. If a given plan has different racial or partisan qualities than what is typical of the collection of plans, the given plan may have been gerrymandered and is labeled as an outlier.
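The Merge-Split and Multi-Scale Merge-Split abstracts above both build on a spanning-tree move: merge two adjacent districts, draw a random spanning tree of the merged region, and cut one tree edge to re-split it. The sketch below is a minimal illustration of that single step only, not the authors' implementation: the adjacency-dict graph representation, the function names, and the Broder random-walk tree sampler are assumptions made for the sketch, and it ignores the hard constraints, measure weights, and acceptance step the papers build around the proposal.

```python
import random

def random_spanning_tree(nodes, adj):
    """Random spanning tree of the subgraph induced by `nodes`, grown by a
    simple random walk (Broder-style; assumes the induced subgraph is
    connected). Returns the list of tree edges."""
    node_set = set(nodes)
    current = random.choice(list(nodes))
    in_tree = {current}
    edges = []
    while len(in_tree) < len(node_set):
        # step the walk to a random neighbor inside the merged region
        nxt = random.choice([v for v in adj[current] if v in node_set])
        if nxt not in in_tree:
            in_tree.add(nxt)
            edges.append((current, nxt))
        current = nxt
    return edges

def merge_split_proposal(assignment, adj, d1, d2):
    """One merge-split move: merge districts d1 and d2, draw a random
    spanning tree of the merged region, cut one tree edge, and relabel the
    two resulting connected components as the new d1 and d2."""
    merged = [v for v in assignment if assignment[v] in (d1, d2)]
    tree = random_spanning_tree(merged, adj)
    cut = random.choice(tree)      # removing one tree edge splits it in two
    remaining = [e for e in tree if e != cut]
    comp = {cut[0]}                # flood-fill the side containing cut[0]
    grew = True
    while grew:
        grew = False
        for a, b in remaining:
            if (a in comp) != (b in comp):
                comp.update((a, b))
                grew = True
    proposal = dict(assignment)
    for v in merged:
        proposal[v] = d1 if v in comp else d2
    return proposal
```

Because each side of the cut is connected within the tree, both new districts are automatically contiguous; in the actual algorithms this move is then filtered through the hard constraints and an accept/reject step to target the weighted measure.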