Bayesian Learning with Dependency Structures via Latent Factors, Mixtures, and Copulas

Date

2016

Abstract

Bayesian methods offer a flexible and convenient probabilistic learning framework for extracting interpretable knowledge from complex, structured data. Such methods can characterize dependencies among multiple levels of hidden variables and share statistical strength across heterogeneous sources. In the first part of this dissertation, we develop two dependent variational inference methods for full posterior approximation in non-conjugate Bayesian models, based on hierarchical mixture- and copula-based variational proposals, respectively. The proposed methods move beyond the widely used factorized (mean-field) approximation to the posterior and apply generically to a broad class of probabilistic models with minimal model-specific derivation. In the second part of this dissertation, we design probabilistic graphical models that accommodate multimodal data, describe dynamical behaviors, and account for task heterogeneity. In particular, the sparse latent factor model reveals common low-dimensional structure in high-dimensional data. We demonstrate the effectiveness of the proposed statistical learning methods on both synthetic and real-world data.
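The copula-based proposals mentioned in the abstract address a limitation the text names: a fully factorized (mean-field) family cannot represent dependence among latent variables. As a rough illustration of the general idea only (not the dissertation's algorithm), the Python sketch below couples two arbitrary marginal distributions through a Gaussian copula with a chosen correlation matrix; all names and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Minimal sketch of a Gaussian-copula proposal (illustrative, hypothetical
# parameters). The copula couples otherwise-independent marginals, so the
# proposal can capture dependence a mean-field family cannot.

rng = np.random.default_rng(0)

# Copula correlation matrix: the dependence structure of the proposal.
R = np.array([[1.0, 0.8],
              [0.8, 1.0]])

# Arbitrary marginal families for the two latent variables, e.g. a
# log-normal scale parameter and a Gaussian location parameter.
marginals = [stats.lognorm(s=0.5), stats.norm(loc=0.0, scale=1.0)]

def sample_copula_proposal(n):
    """Draw n dependent samples with the specified marginals."""
    z = rng.multivariate_normal(np.zeros(2), R, size=n)  # correlated Gaussians
    u = stats.norm.cdf(z)                                # dependent uniforms
    return np.column_stack([m.ppf(u[:, j])               # inverse-CDF transform
                            for j, m in enumerate(marginals)])

samples = sample_copula_proposal(5000)
print("empirical correlation:\n", np.corrcoef(samples.T))
```

The printed empirical correlation is close to 0.8, confirming that the joint samples are dependent even though each marginal keeps its own (non-Gaussian) form; a mean-field proposal over the same marginals would yield correlation near zero.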

Citation

Han, Shaobo (2016). Bayesian Learning with Dependency Structures via Latent Factors, Mixtures, and Copulas. Dissertation, Duke University. Retrieved from https://hdl.handle.net/10161/12828.

Duke's student scholarship is made available to the public under a Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) license.