Browsing by Subject "Variational autoencoder"
Item (Open Access): Deep Generative Models for Image Representation Learning (2018). Pu, Yunchen.
Recently there has been increasing interest in developing generative models of data, offering the promise of learning based on the often vast quantity of unlabeled data. With such learning, one typically seeks to build rich, hierarchical probabilistic models that are able to fit the distribution of complex real data and are also capable of realistic data synthesis. In this dissertation, novel models and learning algorithms are proposed for deep generative models.
This dissertation consists of three main parts.
The first part developed a deep generative model for joint analysis of images and associated labels or captions. The model is efficiently learned using a variational autoencoder. A multilayered (deep) convolutional dictionary representation, a deep generative deconvolutional network (DGDN), is employed as a decoder of the latent image features. Stochastic unpooling is employed to link consecutive layers in the image model, yielding top-down image generation. A deep Convolutional Neural Network (CNN) is used as an image encoder; the CNN is used to approximate a distribution for the latent DGDN features/code. The latent code is also linked to generative models for labels (a Bayesian support vector machine) or captions (a recurrent neural network). When predicting a label/caption for a new image at test time, averaging is performed across the distribution of latent codes; this is computationally efficient as a consequence of the learned CNN-based encoder. Since the framework is capable of modeling the image in the presence/absence of associated labels/captions, a new semi-supervised setting is manifested for CNN learning with images; the framework even allows unsupervised CNN learning, based on images alone. Excellent results are obtained on several benchmark datasets, including ImageNet, demonstrating that the proposed model achieves results that are highly competitive with similarly sized convolutional neural networks.
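As a rough illustration of the encode-decode structure described above, the following is a minimal PyTorch sketch: a CNN encoder that outputs a Gaussian over latent codes, a transposed-convolution decoder standing in for the dictionary-based DGDN decoder with stochastic unpooling, a plain linear classifier standing in for the Bayesian SVM label model, and Monte Carlo averaging over latent codes when predicting a label at test time. The layer sizes, 32x32 RGB inputs, and module names are illustrative assumptions, not the author's implementation.

```python
# Minimal, illustrative sketch (assumed architecture, not the dissertation's model).
import torch
import torch.nn as nn

class ConvVAE(nn.Module):
    def __init__(self, latent_dim=64, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(                      # CNN encoder (32x32x3 -> 8x8x64)
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.to_mu = nn.Linear(64 * 8 * 8, latent_dim)      # approximate posterior mean
        self.to_logvar = nn.Linear(64 * 8 * 8, latent_dim)  # approximate posterior log-variance
        self.decoder = nn.Sequential(                       # stand-in for the deconvolutional decoder
            nn.Linear(latent_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        self.classifier = nn.Linear(latent_dim, num_classes)  # stand-in for the label model

    def encode(self, x):
        h = self.encoder(x)
        return self.to_mu(h), self.to_logvar(h)

    def forward(self, x):
        # Reparameterized latent sample and reconstruction (used by the VAE training objective).
        mu, logvar = self.encode(x)
        z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

    def predict_label(self, x, num_samples=20):
        # Average class probabilities over latent codes sampled from the encoder distribution.
        mu, logvar = self.encode(x)
        std = (0.5 * logvar).exp()
        probs = 0
        for _ in range(num_samples):
            z = mu + std * torch.randn_like(std)
            probs = probs + self.classifier(z).softmax(dim=-1)
        return probs / num_samples
```

Averaging class probabilities over several posterior samples, as in predict_label, is what keeps prediction cheap once the CNN encoder is trained: each additional sample costs only a pass through a small classifier head.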
The second part developed a new method for learning variational autoencoders (VAEs), based on Stein variational gradient descent. A key advantage of this approach is that one need not make parametric assumptions about the form of the encoder distribution. Performance is further enhanced by integrating the proposed encoder with importance sampling. Excellent performance is demonstrated across multiple unsupervised and semi-supervised problems, including semi-supervised analysis of the ImageNet data, illustrating the scalability of the model to large datasets.
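For context, the core SVGD update that such an encoder builds on transports a set of particles along a kernelized gradient of the log target density. Below is a minimal NumPy sketch of one SVGD step with an RBF kernel; the median-heuristic bandwidth and step size are standard choices assumed here, not details taken from the dissertation.

```python
# One Stein variational gradient descent (SVGD) step with an RBF kernel (illustrative sketch).
import numpy as np

def svgd_step(particles, grad_log_p, step_size=0.1):
    """particles: (n, d) array; grad_log_p: maps an (n, d) array to (n, d) gradients of log p."""
    n = particles.shape[0]
    diffs = particles[:, None, :] - particles[None, :, :]   # pairwise differences x_i - x_j
    sq_dists = (diffs ** 2).sum(-1)                          # squared pairwise distances
    h = np.median(sq_dists) / np.log(n + 1) + 1e-8           # median-heuristic bandwidth
    k = np.exp(-sq_dists / h)                                # RBF kernel matrix
    # Driving term: kernel-weighted gradients of the log target density at the particles.
    drive = k @ grad_log_p(particles)
    # Repulsive term: gradient of the kernel, which keeps the particles spread out.
    repulse = (2.0 / h) * (k[:, :, None] * diffs).sum(axis=1)
    return particles + step_size * (drive + repulse) / n
```

For example, with grad_log_p = lambda x: -x (a standard normal target), repeatedly applying svgd_step moves an arbitrary particle cloud toward samples from N(0, I); in the VAE setting the particles play the role of latent codes and the target is the unnormalized posterior.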
The third part developed a new form of variational autoencoder, in which the joint distribution of data and codes is considered in two (symmetric) forms: (i) from observed data fed through the encoder to yield codes, and (ii) from latent codes drawn from a simple
prior and propagated through the decoder to manifest data. Lower bounds are learned for marginal log-likelihood fits to observed data and latent codes. When learning with the variational bound, one seeks to minimize the symmetric Kullback-Leibler divergence of
joint density functions from (i) and (ii), while simultaneously seeking to maximize the two marginal log-likelihoods. To facilitate learning, a new form of adversarial training is developed. An extensive set of experiments is performed, in which we demonstrate state-of-the-art data reconstruction and generation on several image benchmark datasets.
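One way to make the two sampling directions and the adversarial ingredient concrete is the sketch below (PyTorch), in which a discriminator scores (data, code) pairs produced by direction (i) against pairs produced by direction (ii). The module interfaces and the binary cross-entropy critic are illustrative assumptions, not the dissertation's exact objective.

```python
# Illustrative discriminator loss over (data, code) pairs from the two joint distributions.
import torch
import torch.nn.functional as F

def joint_discriminator_loss(encoder, decoder, discriminator, x_real, latent_dim):
    # Direction (i): observed data fed through the encoder to yield codes.
    z_from_x = encoder(x_real)
    # Direction (ii): codes drawn from a simple prior, propagated through the decoder.
    z_prior = torch.randn(x_real.size(0), latent_dim)
    x_from_z = decoder(z_prior)
    # The discriminator scores (x, z) pairs sampled each of the two ways.
    score_q = discriminator(x_real, z_from_x)    # pairs from direction (i)
    score_p = discriminator(x_from_z, z_prior)   # pairs from direction (ii)
    return (F.binary_cross_entropy_with_logits(score_q, torch.ones_like(score_q))
            + F.binary_cross_entropy_with_logits(score_p, torch.zeros_like(score_p)))
```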
Item (Open Access): Joint Data Modeling Using Variational Autoencoders (2022). Kumar, Achint.
Nervous systems are macroscopic nonequilibrium physical systems that produce intricate behaviors that remain difficult to analyze, classify, and understand. In my thesis, I develop and analyze a statistical technique based on machine learning that attempts to improve upon previous efforts to analyze behavioral data by being multimodal, which means combining information from different kinds of observables to provide better insight than any one observable can provide alone. Many modern experiments simultaneously record data from multiple sources (e.g., audio, video, neural data), and it is of great interest to learn the relationships between these data sources. Multimodal datasets present a challenge for latent variable models, as they must learn to capture not only the variance present within each data type but also the relationships among types. Typically, this is done by training a collection of unimodal experts, the outputs of which are aggregated in a shared latent space. Here, building on recent developments in identifiable variational autoencoders (VAEs), I propose a new joint analysis method, the product of identifiable sufficient experts (POISE-VAE), which posits a latent representation unique to each modality, with latent spaces interacting via an undirected graphical model. This model guarantees identifiability of the latent spaces without the need for additional covariates and, given a simple yet flexible class of approximate posteriors, can be trained by maximizing an evidence lower bound approximated by Gibbs sampling. I show performance comparable to existing methods on a variety of toy and benchmark datasets in generating realistic samples, with applications to the simultaneous modeling of brain calcium imaging data and behavior. I then use the VAE framework to investigate the vocalizations of hearing and deaf mice during courtship; of particular interest is whether auditory feedback affects the vocalizations the two groups produce. I use the low-dimensional representation of the data learned by the VAE to compare vocalizations produced in the two cases. My statistical analysis, based on maximum mean discrepancy (MMD), finds no statistical difference between the vocalizations produced by the two groups. I conclude with a discussion of possible extensions of the model.
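For reference, the MMD statistic mentioned above compares two samples through average kernel similarities within and across the groups. The following is a minimal NumPy sketch of the biased squared-MMD estimate with an RBF kernel and a median-heuristic bandwidth; these particular choices are assumptions for illustration, not details from the thesis.

```python
# Biased squared-MMD two-sample statistic with an RBF kernel (illustrative sketch).
import numpy as np

def mmd2_rbf(X, Y, bandwidth=None):
    """X: (n, d) and Y: (m, d) samples; returns the biased MMD^2 estimate."""
    Z = np.vstack([X, Y])
    sq_dists = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)  # all pairwise squared distances
    if bandwidth is None:
        bandwidth = np.median(sq_dists) + 1e-8                  # median heuristic
    K = np.exp(-sq_dists / bandwidth)                           # RBF kernel matrix
    n, m = len(X), len(Y)
    k_xx, k_yy, k_xy = K[:n, :n], K[n:, n:], K[:n, n:]
    # Mean within-group similarity minus mean cross-group similarity.
    return k_xx.mean() + k_yy.mean() - 2 * k_xy.mean()
```

In practice, significance of such a statistic is typically assessed with a permutation test that recomputes the estimate under random reassignments of the two group labels.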