Browsing by Author "Jiang, Sheng"
Item Open Access: Bayesian Models for Causal Analysis with Many Potentially Weak Instruments (2015), Jiang, Sheng

This paper investigates Bayesian instrumental variable models with many instruments. The number of instrumental variables grows with the sample size and is allowed to be much larger than it. Under a sparsity condition on the coefficients of the instruments, we characterize a general prior specification for which posterior consistency of the parameters is established, and we calculate the corresponding convergence rate.
In particular, we establish posterior consistency for a class of spike-and-slab priors on the many potentially weak instruments. The spike-and-slab prior shrinks the number of active instrumental variables, which avoids overfitting and provides uncertainty quantification for the first stage. A simulation study illustrates the notion of convergence and the estimation and selection performance under dependent instruments. Computational issues related to the Gibbs sampler are also discussed.
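As a rough illustration of the kind of first-stage shrinkage described above, the following is a minimal sketch of a Gibbs sampler for a point-mass spike and normal slab prior in a sparse linear model. The simulated data, the fixed noise variance, and all prior settings are hypothetical simplifications for exposition, not the paper's actual instrumental variable model, which jointly handles both stages.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical first-stage-style data: many candidate instruments, few active.
n, p = 200, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]          # sparse truth
y = X @ beta_true + rng.standard_normal(n)

def spike_slab_gibbs(X, y, n_iter=400, tau2=10.0, sigma2=1.0, q=0.1, seed=1):
    """Minimal Gibbs sampler for a point-mass spike / normal slab prior.

    beta_j = 0 with probability 1 - q, else beta_j ~ N(0, tau2). The noise
    variance sigma2 is held fixed for simplicity; a full sampler would also
    update sigma2 and the inclusion probability q.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    gamma = np.zeros(p, dtype=bool)
    incl_count = np.zeros(p)
    xtx = np.sum(X**2, axis=0)
    for it in range(n_iter):
        for j in range(p):
            # Partial residual excluding coordinate j.
            r = y - X @ beta + X[:, j] * beta[j]
            # Conditional posterior for beta_j given inclusion.
            prec = xtx[j] / sigma2 + 1.0 / tau2
            mean = (X[:, j] @ r / sigma2) / prec
            # Log Bayes factor of inclusion vs exclusion, plus prior log-odds.
            log_odds = (np.log(q / (1 - q))
                        - 0.5 * np.log(tau2 * prec)
                        + 0.5 * mean**2 * prec)
            p_incl = 1.0 / (1.0 + np.exp(-log_odds))
            if rng.random() < p_incl:
                gamma[j] = True
                beta[j] = rng.normal(mean, np.sqrt(1.0 / prec))
            else:
                gamma[j] = False
                beta[j] = 0.0
        if it >= n_iter // 2:              # discard first half as burn-in
            incl_count += gamma
    return incl_count / (n_iter - n_iter // 2)

pip = spike_slab_gibbs(X, y)               # posterior inclusion probabilities
print(pip[:5].round(2))
```

With strong signals and independent columns, the posterior inclusion probabilities concentrate near one on the active coordinates and near zero elsewhere, which is the uncertainty quantification on the first stage referred to above.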
Item Open Access: Consistency and Adaptation of Gaussian Process Regression, Bayesian Stochastic Block Model and Tail Index (2021), Jiang, Sheng

Bayesian methods offer adaptive inference via hierarchical extensions, and the posterior distribution automatically provides uncertainty quantification. Frequentist evaluation of Bayesian methods is therefore a fundamental and necessary step in Bayesian analysis.
Bayesian nonparametric regression under a rescaled Gaussian process prior offers smoothness-adaptive function estimation with near minimax-optimal error rates. Hierarchical extensions of this approach, equipped with stochastic variable selection, are known to also adapt to the unknown intrinsic dimension of a sparse true regression function. But it remains unclear whether such extensions offer variable selection consistency, i.e., whether the true subset of important variables can be consistently learned from the data. It is shown here that variable selection consistency can indeed be achieved with such models, at least when the true regression function has finite smoothness, which induces a polynomially larger penalty on the inclusion of false-positive predictors. Our result covers the high-dimensional asymptotic setting in which the predictor dimension is allowed to grow with the sample size.
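For intuition only, here is a minimal numpy sketch of posterior-mean estimation under a rescaled squared-exponential Gaussian process prior. The one-dimensional test function, the fixed rescaling parameter `a`, and the known noise variance are illustrative assumptions; in the adaptive theory the rescaling grows with the sample size at a rate tied to the unknown smoothness, and no hierarchical adaptation or variable selection is performed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-d example: a smooth true function observed with noise.
n = 100
x = np.sort(rng.uniform(-2, 2, n))
f_true = np.sin(2 * x)
y = f_true + 0.1 * rng.standard_normal(n)

def gp_posterior_mean(x_train, y_train, x_test, a=2.0, sigma2=0.01):
    """Posterior mean under a rescaled squared-exponential GP prior.

    The kernel k(s, t) = exp(-a^2 (s - t)^2 / 2) has rescaling parameter a;
    larger a means a rougher prior. Here a and the noise variance sigma2
    are simply fixed rather than adapted.
    """
    def k(u, v):
        return np.exp(-0.5 * a**2 * (u[:, None] - v[None, :])**2)
    K = k(x_train, x_train) + sigma2 * np.eye(len(x_train))
    # Standard GP regression posterior mean: k_* (K + sigma2 I)^{-1} y.
    return k(x_test, x_train) @ np.linalg.solve(K, y_train)

f_hat = gp_posterior_mean(x, y, x)
mse = np.mean((f_hat - f_true)**2)
print(round(mse, 4))
```

The posterior mean recovers the smooth truth with error well below the noise level; the theoretical results above concern how fast this error shrinks when the smoothness and the relevant coordinates are unknown.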
Stochastic Block Models (SBMs) are a fundamental tool for community detection in network analysis. But little theoretical work exists on the statistical performance of Bayesian SBMs, especially when the number of communities is unknown. This project studies weakly assortative SBMs, in which members of the same community are more likely to connect with one another than with members of other communities. The weak assortativity constraint is embedded within an otherwise weak prior, and, under mild regularity conditions, the resulting posterior distribution is shown to concentrate on the true number of communities and the true membership allocation as the network size grows to infinity. A reversible-jump Markov chain Monte Carlo posterior computation strategy is developed by adapting the allocation sampler. Finite-sample properties are examined via simulation studies, in which the proposed method offers competitive estimation accuracy relative to existing methods under a variety of challenging scenarios.
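To make the weak assortativity condition concrete, the following small simulation sketch draws a network from an SBM whose diagonal block probabilities dominate each row. The block matrix, community sizes, and labels are hypothetical, and no posterior inference (let alone reversible-jump sampling) is attempted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sbm(z, P, rng):
    """Sample an undirected, loop-free adjacency matrix from an SBM.

    z: integer community labels; P: symmetric block connection matrix.
    Weak assortativity requires P[k, k] > P[k, l] for every l != k.
    """
    n = len(z)
    probs = P[np.ix_(z, z)]                    # edge probability for each pair
    upper = (rng.random((n, n)) < probs)
    A = np.triu(upper, 1).astype(int)          # keep strict upper triangle
    return A + A.T                             # symmetrize, zero diagonal

# Hypothetical weakly assortative example with 3 communities of 60 nodes.
P = np.array([[0.30, 0.05, 0.10],
              [0.05, 0.25, 0.08],
              [0.10, 0.08, 0.40]])
z = np.repeat([0, 1, 2], 60)
A = sample_sbm(z, P, rng)

# Empirical within- vs between-community edge densities.
same = np.equal.outer(z, z)
within = A[same].mean()
between = A[~same].mean()
print(within > between)
```

Under weak assortativity the within-community density exceeds the between-community density, which is the structure the constrained prior above encodes.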
Tail index estimation has been well studied in the frequentist literature, but few asymptotic studies of Bayesian tail index estimation are available. This paper works with a transformation-based semiparametric density model obtained by nonparametrically transforming a parametric CDF. The semiparametric density model offers both accurate density estimation and accurate tail index estimation. Compared with frequentist methods, it avoids choosing a high quantile at which to threshold the data. We provide sufficient conditions on the parametric family and the logistic Gaussian process prior under which a posterior contraction rate for the tail index can be established. Limitations of the semiparametric density model are also discussed.
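The threshold-selection issue mentioned above can be seen with the classical frequentist Hill estimator, which the Bayesian semiparametric approach sidesteps. This sketch uses a simulated Pareto sample (a hypothetical choice, not data from the paper) and shows how the estimate depends on the number k of upper order statistics retained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Classical Pareto(alpha = 2) sample on [1, inf); true tail index is 1/alpha = 0.5.
alpha = 2.0
x = rng.pareto(alpha, 5000) + 1.0

def hill(x, k):
    """Hill estimator of the tail index from the k largest order statistics."""
    xs = np.sort(x)[::-1]                      # descending order
    return np.mean(np.log(xs[:k])) - np.log(xs[k])

# The estimate varies with k, a tuning choice the analyst must make.
estimates = {k: hill(x, k) for k in (50, 200, 1000)}
print({k: round(v, 3) for k, v in estimates.items()})
```

Each choice of k thresholds the data at a different high quantile; the semiparametric density model described above instead uses the full sample, with the transformation carrying the tail behavior.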