Bayesian and Information-Theoretic Learning of High Dimensional Data

dc.contributor.advisor

Carin, Lawrence

dc.contributor.author

Chen, Minhua

dc.date.accessioned

2012-05-25T20:21:02Z

dc.date.available

2012-05-25T20:21:02Z

dc.date.issued

2012

dc.department

Electrical and Computer Engineering

dc.description.abstract

The concept of sparseness is harnessed to learn a low-dimensional representation of high-dimensional data. This sparseness assumption is exploited in multiple ways. In the Bayesian Elastic Net, a small number of correlated features is identified for the response variable. In the sparse Factor Analysis for biomarker trajectories, high-dimensional gene-expression data are reduced to a small number of latent factors, each with a prototypical dynamic trajectory. In the Bayesian Graphical LASSO, the inverse covariance matrix of the data distribution is assumed to be sparse, inducing a sparsely connected Gaussian graph. In the nonparametric Mixture of Factor Analyzers, the covariance matrices in the Gaussian Mixture Model are constrained to be low-rank, which is closely related to the concept of block sparsity.
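The low-rank covariance constraint mentioned for the Mixture of Factor Analyzers can be illustrated with a minimal sketch (assumed for illustration, not the dissertation's own code): a factor-analysis covariance has the form Sigma = Lambda Lambda^T + diag(psi), which needs far fewer parameters than a full covariance matrix.

```python
import numpy as np

# Hypothetical sketch: a factor-analysis covariance is "low-rank plus
# diagonal", Sigma = Lambda @ Lambda.T + diag(psi). With d observed
# dimensions and k latent factors, the model needs d*k + d parameters
# instead of d*(d+1)/2 for an unconstrained covariance.
d, k = 100, 5
rng = np.random.default_rng(0)
Lambda = rng.standard_normal((d, k))   # factor loadings (d x k)
psi = np.abs(rng.standard_normal(d))   # per-dimension noise variances

Sigma = Lambda @ Lambda.T + np.diag(psi)

full_params = d * (d + 1) // 2         # 5050 for d = 100
fa_params = d * k + d                  # 600 for d = 100, k = 5

# The "signal" part of Sigma has rank k, the number of latent factors.
signal_rank = np.linalg.matrix_rank(Lambda @ Lambda.T)
```

In a mixture model, each Gaussian component carries its own such low-rank covariance, which is what keeps the overall parameter count manageable in high dimensions.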

Finally, in the information-theoretic projection design, a linear projection matrix is explicitly sought for information-preserving dimensionality reduction. All of the methods above prove effective on both simulated and real high-dimensional datasets.
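A linear projection for dimensionality reduction can be sketched with a standard variance-preserving example (PCA, used here purely as an illustration; the dissertation's information-theoretic design criterion is different): project the data onto the top eigenvectors of the sample covariance.

```python
import numpy as np

# Illustrative sketch: learn a linear projection y = A @ x that keeps the
# directions of largest variance. Data are simulated to lie near a
# 3-dimensional subspace of R^20.
rng = np.random.default_rng(1)
d, m, n = 20, 3, 500
basis = rng.standard_normal((d, m))
X = rng.standard_normal((n, m)) @ basis.T + 0.01 * rng.standard_normal((n, d))

Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / n
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order

A = eigvecs[:, -m:].T                   # (m, d) projection matrix
Y = Xc @ A.T                            # (n, m) reduced representation

# Fraction of total variance retained by the m-dimensional projection.
explained = eigvals[-m:].sum() / eigvals.sum()
```

Since the simulated data are nearly confined to an m-dimensional subspace, the projection retains almost all of the variance; an information-theoretic design would instead optimize a mutual-information-based objective over the projection matrix.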

dc.identifier.uri

https://hdl.handle.net/10161/5588

dc.subject

Electrical engineering

dc.subject

Statistics

dc.subject

Computer science

dc.subject

Bayesian statistics

dc.subject

High Dimensional Data Analysis

dc.subject

Information-Theoretic Learning

dc.subject

Machine learning

dc.subject

Signal processing

dc.subject

Sparseness

dc.title

Bayesian and Information-Theoretic Learning of High Dimensional Data

dc.type

Dissertation

Files

Original bundle

Name: Chen_duke_0066D_11405.pdf
Size: 5.3 MB
Format: Adobe Portable Document Format