Bayesian and Information-Theoretic Learning of High Dimensional Data


Date

2012

Repository Usage Stats

1083 views, 1359 downloads

Abstract

The concept of sparseness is harnessed to learn a low-dimensional representation of high-dimensional data, and this sparseness assumption is exploited in several ways. In the Bayesian Elastic Net, a small number of correlated features are identified as predictors of the response variable. In the sparse Factor Analysis for biomarker trajectories, high-dimensional gene expression data are reduced to a small number of latent factors, each with a prototypical dynamic trajectory. In the Bayesian Graphical LASSO, the inverse covariance matrix of the data distribution is assumed to be sparse, inducing a sparsely connected Gaussian graphical model. In the nonparametric Mixture of Factor Analyzers, the covariance matrices in the Gaussian Mixture Model are constrained to be low-rank, which is closely related to the concept of block sparsity.
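The elastic net idea above combines L1 and L2 penalties on regression coefficients so that a small group of correlated predictors is selected together. The dissertation develops a Bayesian formulation; as a point of reference only, the classical penalized objective can be minimized by coordinate descent. The following NumPy sketch is a minimal illustration under assumed penalty weights and synthetic data, not the author's method:

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding operator, the proximal map of the L1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def elastic_net(X, y, alpha=0.1, l1_ratio=0.9, n_iter=200):
    """Coordinate descent for the classical (non-Bayesian) elastic net:
    minimize (1/2n)||y - Xb||^2 + alpha*(l1_ratio*||b||_1
                                         + (1 - l1_ratio)/2 * ||b||^2).
    """
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n   # per-feature curvature terms
    r = y - X @ b                       # running residual
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]         # remove coordinate j from the residual
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, alpha * l1_ratio) / (
                col_sq[j] + alpha * (1 - l1_ratio))
            r -= X[:, j] * b[j]         # add the updated coordinate back
    return b

# Synthetic data: only 2 of 10 features are truly active.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.standard_normal(100)
b = elastic_net(X, y)
```

The L1 part of the penalty drives the coefficients of the eight inactive features toward zero, while the L2 part keeps correlated predictors from being selected arbitrarily.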

Finally, in the information-theoretic projection design, a linear projection matrix is explicitly sought for information-preserving dimensionality reduction. All of the methods above prove effective on both simulated and real high-dimensional datasets.

Citation

Chen, Minhua (2012). Bayesian and Information-Theoretic Learning of High Dimensional Data. Dissertation, Duke University. Retrieved from https://hdl.handle.net/10161/5588.

Collections


Duke's student scholarship is made available to the public using a Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) license.