Show simple item record

dc.contributor.advisor Carin, Lawrence en_US
dc.contributor.author Chen, Minhua en_US
dc.date.accessioned 2012-05-25T20:21:02Z
dc.date.available 2012-05-25T20:21:02Z
dc.date.issued 2012 en_US
dc.identifier.uri http://hdl.handle.net/10161/5588
dc.description Dissertation en_US
dc.description.abstract The concept of sparseness is harnessed to learn a low dimensional representation of high dimensional data. This sparseness assumption is exploited in multiple ways. In the Bayesian Elastic Net, a small number of correlated features are identified for the response variable. In the sparse Factor Analysis for biomarker trajectories, the high dimensional gene expression data are reduced to a small number of latent factors, each with a prototypical dynamic trajectory. In the Bayesian Graphical LASSO, the inverse covariance matrix of the data distribution is assumed to be sparse, inducing a sparsely connected Gaussian graph. In the nonparametric Mixture of Factor Analyzers, the covariance matrices in the Gaussian Mixture Model are forced to be low-rank, which is closely related to the concept of block sparsity. Finally, in the information-theoretic projection design, a linear projection matrix is explicitly sought for information-preserving dimensionality reduction. All of the methods above prove effective on both simulated and real high dimensional datasets. en_US
dc.subject Electrical engineering en_US
dc.subject Statistics en_US
dc.subject Computer science en_US
dc.subject Bayesian Statistics en_US
dc.subject High Dimensional Data Analysis en_US
dc.subject Information-Theoretic Learning en_US
dc.subject Machine Learning en_US
dc.subject Signal Processing en_US
dc.subject Sparseness en_US
dc.title Bayesian and Information-Theoretic Learning of High Dimensional Data en_US
dc.type Dissertation en_US
dc.department Electrical and Computer Engineering en_US
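To illustrate the sparse-regression idea behind the elastic net named in the abstract, here is a minimal NumPy sketch. It is a plain coordinate-descent point estimate, not the Bayesian Elastic Net developed in the dissertation; the function names, penalty weights, and toy data are illustrative assumptions only.

```python
import numpy as np

def soft_threshold(z, g):
    """Soft-thresholding operator, the proximal map of the L1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

def elastic_net(X, y, lam1=0.1, lam2=0.1, n_iter=200):
    """Coordinate descent for
        (1/2n)||y - X b||^2 + lam1 * ||b||_1 + (lam2/2) * ||b||^2.
    Returns a sparse point estimate of the coefficient vector b."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n          # per-column curvature of the loss
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]   # partial residual, feature j held out
            rho = X[:, j] @ r_j / n
            b[j] = soft_threshold(rho, lam1) / (col_sq[j] + lam2)
    return b

# Toy demo: only 2 of 10 features carry signal; the L1 term zeros out the rest.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
beta_true = np.zeros(10)
beta_true[0], beta_true[1] = 2.0, -3.0
y = X @ beta_true + 0.01 * rng.normal(size=100)
b_hat = elastic_net(X, y, lam1=0.05, lam2=0.01)
```

The L1 weight `lam1` produces exact zeros (the sparseness the abstract exploits), while the L2 weight `lam2` stabilizes correlated features; the Bayesian version in the dissertation instead places a prior that induces the same penalty.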
