Towards Better Representations with Deep/Bayesian Learning
Deep learning and Bayesian learning are two popular research topics in machine learning. They provide flexible representations in complementary ways, so it is desirable to take the best from both fields. This thesis focuses on the intersection of the two topics, enriching each with the other, and pursues two resulting research directions: Bayesian deep learning and deep Bayesian learning.
In Bayesian deep learning, scalable Bayesian methods are proposed to learn the weight uncertainty of deep neural networks (DNNs). On this topic, I propose preconditioned stochastic gradient MCMC methods, show their connection to Dropout, and demonstrate their applications to modern network architectures in computer vision and natural language processing.
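To make the idea concrete, the core update of a preconditioned stochastic gradient Langevin dynamics sampler can be sketched as follows. This is a minimal illustrative sketch, not the thesis implementation: it assumes an RMSprop-style diagonal preconditioner, and the function and argument names (`psgld_step`, `grad_log_post`, `state`) are chosen here for exposition.

```python
import numpy as np

def psgld_step(theta, grad_log_post, state, lr=1e-3, alpha=0.99, eps=1e-5):
    """One preconditioned SGLD update (illustrative sketch).

    theta         -- current parameter vector
    grad_log_post -- stochastic gradient of the log posterior at theta
    state         -- running average of squared gradients (preconditioner state)
    """
    # Update the RMSprop-style second-moment estimate of the gradient
    state = alpha * state + (1.0 - alpha) * grad_log_post ** 2
    # Diagonal preconditioner: large steps in flat directions, small in sharp ones
    G = 1.0 / (np.sqrt(state) + eps)
    # Langevin update: preconditioned gradient step plus matched Gaussian noise
    noise = np.random.randn(*theta.shape) * np.sqrt(lr * G)
    theta = theta + 0.5 * lr * G * grad_log_post + noise
    return theta, state
```

Iterating this step over minibatches yields (approximate) posterior samples of the network weights rather than a single point estimate, which is the source of the weight uncertainty mentioned above.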
In deep Bayesian learning, DNNs are employed as powerful representations of conditional distributions in traditional Bayesian models. I focus on understanding recent adversarial learning methods for joint distribution matching, through which several recent bivariate adversarial models are unified. This analysis reveals non-identifiability issues in bidirectional adversarial learning, and I propose ALICE, a conditional entropy framework that remedies them. By resolving the non-identifiability issues, the derived algorithms show significant improvement on image generation and translation tasks.
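The conditional entropy term at the heart of ALICE is typically bounded (up to constants) by a cycle-consistency reconstruction loss, which can be sketched in a toy form below. This is a deliberately simplified illustration, assuming linear encoder/decoder maps and an L2 reconstruction penalty; the names `W_enc`, `W_dec`, and `reconstruction_loss` are illustrative only, not the thesis code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoder" x -> z and "decoder" z -> x as linear maps (illustrative only;
# in practice both would be deep networks trained jointly with the adversarial loss)
W_enc = rng.normal(size=(2, 2))
W_dec = rng.normal(size=(2, 2))

def reconstruction_loss(x):
    """Cycle-consistency term used as a surrogate bound on the conditional
    entropy H(x|z): encode x, decode back, and penalize the mismatch."""
    z = x @ W_enc.T        # encode
    x_rec = z @ W_dec.T    # decode
    return np.mean((x - x_rec) ** 2)

x = rng.normal(size=(64, 2))
loss = reconstruction_loss(x)
```

Adding such a term to the adversarial joint-matching objective ties each latent code to a specific reconstruction, which is how the framework rules out the degenerate solutions behind the non-identifiability problem.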
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.
Rights for Collection: Duke Dissertations
Works are deposited here by their authors, and represent their research and opinions, not those of Duke University.