Error bounds for Approximations of Markov chains

Date

2017-11-30

Repository Usage Stats

152 views, 86 downloads

Abstract

The first part of this article gives error bounds for approximations of Markov kernels under Foster-Lyapunov conditions. The basic idea is that when both the approximating kernel and the original kernel satisfy a Foster-Lyapunov condition, the long-time dynamics of the two chains -- as well as the invariant measures, when they exist -- will be close in a weighted total variation norm, provided that the approximation is sufficiently accurate. The required accuracy depends in part on the Lyapunov function, with more stable chains being more tolerant of approximation error. We are motivated by the recent growth in proposals for scaling Markov chain Monte Carlo algorithms to large datasets by defining an approximating kernel that is faster to sample from. Many of these proposals use only a small subset of the data points to construct the transition kernel, and we consider an application to this class of approximating kernels. We also consider applications to distribution approximations in Gibbs sampling. Another setting in which approximating kernels are commonly used is Metropolis algorithms for Gaussian process models, which are common in spatial statistics and nonparametric regression. Here there are typically two sources of approximation error: discretization error and approximation of Metropolis acceptance ratios. Because the approximating kernel is obtained by discretizing the state space, it is singular with respect to the exact kernel. To analyze this application, we give additional results in Wasserstein metrics, in contrast to the preceding examples, which quantified the level of approximation in a total variation norm.
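
The bounds just described are organized around a drift condition and a weighted total variation norm. For orientation, the following is a generic sketch of the standard form of these objects; the symbols (P for the exact kernel, P_ε for the approximating kernel, V for the Lyapunov function, and the constants γ, K, β, ε) are notational assumptions for this note and may not match the paper's exact statements or constants:

    \text{Foster--Lyapunov drift condition:} \quad (PV)(x) \le \gamma V(x) + K, \qquad \gamma \in (0,1),\ K < \infty

    \text{Weighted total variation norm:} \quad \rho_\beta(\mu,\nu) = \int \bigl(1 + \beta V(x)\bigr)\,|\mu - \nu|(dx)

Informally, if both kernels satisfy a drift condition of this type and the one-step error \rho_\beta(\delta_x P, \delta_x P_\varepsilon) is of order \varepsilon\bigl(1 + \beta V(x)\bigr), then the n-step distributions, and the invariant measures when they exist, remain within O(\varepsilon) of one another in \rho_\beta, with a smaller \gamma (a more stable chain) giving a smaller constant in the bound.

The subsampling proposals mentioned in the abstract replace the full-data likelihood in a Metropolis-Hastings acceptance ratio with an estimate computed from a random minibatch. The Python sketch below is a hypothetical illustration of that general idea only, not the construction analyzed in the paper; the function names and the n/m rescaling of the minibatch log-likelihood are assumptions made for the example.

    import numpy as np

    def subsampled_mh_step(theta, data, log_prior, log_lik_point, step_size, rng, m=100):
        """One step of a hypothetical subsample-based Metropolis-Hastings kernel.

        The full-data log-likelihood is replaced by a rescaled sum over a random
        minibatch of size m; this substitution is what makes the kernel an
        approximation of the exact one.
        """
        n = len(data)
        idx = rng.choice(n, size=m, replace=False)               # random minibatch
        theta_prop = theta + step_size * rng.standard_normal()   # symmetric random-walk proposal

        def approx_log_post(t):
            # Rescale the minibatch sum so it estimates the full-data log-likelihood.
            return log_prior(t) + (n / m) * sum(log_lik_point(t, data[i]) for i in idx)

        log_alpha = approx_log_post(theta_prop) - approx_log_post(theta)
        if np.log(rng.uniform()) < log_alpha:                    # Metropolis accept/reject
            return theta_prop
        return theta

Because the acceptance ratio uses an estimated rather than exact likelihood, a chain built from such steps targets a perturbed invariant measure; bounds of the kind developed in this article quantify how far that perturbed measure can lie from the exact posterior.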

Scholars@Duke

Jonathan Christopher Mattingly

Kimberly J. Jenkins Distinguished University Professor of New Technologies

Jonathan Christopher Mattingly grew up in Charlotte, NC, where he attended Irwin Avenue Elementary and Charlotte Country Day. He graduated from the NC School of Science and Mathematics and received a BS in Applied Mathematics with a concentration in physics from Yale University. After two years abroad, including a year at ENS Lyon studying nonlinear and statistical physics on a Rotary Fellowship, he returned to the US to attend Princeton University, where he obtained a PhD in Applied and Computational Mathematics in 1998. After four years as a Szegő Assistant Professor at Stanford University and a year as a member of the IAS in Princeton, he moved to Duke in 2003. He is currently a Professor of Mathematics and of Statistical Science.

His expertise is in the long-time behavior of stochastic systems, including randomly forced fluid dynamics, turbulence, stochastic algorithms used in molecular dynamics and Bayesian sampling, and stochasticity in biochemical networks.

Since 2013 he has also been working to understand and quantify gerrymandering and its interaction with a region's geopolitical landscape. This work has led him to testify in a number of court cases, including in North Carolina, where the NC congressional and both NC legislative maps were deemed unconstitutional and replaced for the 2020 elections.

He is the recipient of a Sloan Fellowship and a PECASE (Presidential Early Career Award for Scientists and Engineers). He is also a Fellow of the IMS and the AMS. He was awarded the Defender of Freedom award by Common Cause for his work on Quantifying Gerrymandering.



Unless otherwise indicated, scholarly articles published by Duke faculty members are made available here with a CC-BY-NC (Creative Commons Attribution Non-Commercial) license, as enabled by the Duke Open Access Policy. If you wish to use the materials in ways not already permitted under CC-BY-NC, please consult the copyright owner. Other materials are made available here through the author’s grant of a non-exclusive license to make their work openly accessible.