Browsing by Author "Carin, L"
Now showing 1 - 16 of 16
Item Open Access: Adaptive temporal compressive sensing for video (2013 IEEE International Conference on Image Processing, ICIP 2013 - Proceedings, 2013-12-01)
Yuan, X; Yang, J; Llull, P; Liao, X; Sapiro, G; Brady, DJ; Carin, L
This paper introduces the concept of adaptive temporal compressive sensing (CS) for video. We propose a CS algorithm that adapts the compression ratio based on the scene's temporal complexity, computed from the compressed data, without compromising the quality of the reconstructed video. The temporal adaptivity is manifested by manipulating the integration time of the camera, opening the possibility of real-time implementation. The proposed algorithm is a generalized temporal CS approach that can be incorporated into a diverse set of existing hardware systems.

Item Open Access: Augment-and-conquer negative binomial processes (Advances in Neural Information Processing Systems, 2012-12-01)
Zhou, M; Carin, L
By developing data augmentation methods unique to the negative binomial (NB) distribution, we unite seemingly disjoint count and mixture models under the NB process framework. We develop fundamental properties of the models and derive efficient Gibbs sampling inference. We show that the gamma-NB process can be reduced to the hierarchical Dirichlet process with normalization, highlighting its unique theoretical, structural and computational advantages. A variety of NB processes with distinct sharing mechanisms are constructed and applied to topic modeling, with connections to existing algorithms, showing the importance of inferring both the NB dispersion and probability parameters.

Item Open Access: Beta-negative binomial process and Poisson factor analysis (Journal of Machine Learning Research, 2012-01-01)
Zhou, M; Hannah, LA; Dunson, DB; Carin, L
A beta-negative binomial (BNB) process is proposed, leading to a beta-gamma-Poisson process, which may be viewed as a "multiscoop" generalization of the beta-Bernoulli process. The BNB process is augmented into a beta-gamma-gamma-Poisson hierarchical structure and applied as a nonparametric Bayesian prior for an infinite Poisson factor analysis model. A finite approximation for the beta process Lévy random measure is constructed for convenient implementation. Efficient MCMC computations are performed with data augmentation and marginalization techniques. Encouraging results are shown on document count matrix factorization.

Item Open Access: Communications inspired linear discriminant analysis (Proceedings of the 29th International Conference on Machine Learning, ICML 2012, 2012-10-10)
Chen, M; Carson, W; Rodrigues, M; Calderbank, R; Carin, L
We study the problem of supervised linear dimensionality reduction, taking an information-theoretic viewpoint. The linear projection matrix is designed by maximizing the mutual information between the projected signal and the class label. By harnessing a recent theoretical result on the gradient of mutual information, the above optimization problem can be solved directly using gradient descent, without requiring simplification of the objective function. Theoretical analysis and empirical comparison are made between the proposed method and two closely related methods, and comparisons are also made with a method in which Rényi entropy is used to define the mutual information (in this case the gradient may be computed simply, under a special parameter setting). Relative to these alternative approaches, the proposed method achieves promising results on real datasets.

Item Open Access: Communications-inspired projection design with application to compressive sensing (SIAM Journal on Imaging Sciences, 2012-12-01)
Carson, WR; Chen, M; Rodrigues, MRD; Calderbank, R; Carin, L
We consider the recovery of an underlying signal x ∈ ℂ^m based on projection measurements of the form y = Mx + w, where y ∈ ℂ^ℓ and w is measurement noise; we are interested in the case ℓ ≪ m. It is assumed that the signal model p(x) is known and that w ~ CN(w; 0, Σ_w) for known Σ_w. The objective is to design a projection matrix M ∈ ℂ^{ℓ×m} to maximize key information-theoretic quantities with operational significance, including the mutual information between the signal and the projections I(x; y) or the Rényi entropy of the projections h_α(y) (Shannon entropy is a special case). By capitalizing on explicit characterizations of the gradients of the information measures with respect to the projection matrix, where we also partially extend the well-known results of Palomar and Verdú from the mutual information to the Rényi entropy domain, we reveal the key operations carried out by the optimal projection designs: mode exposure and mode alignment. Experiments are considered for the case of compressive sensing (CS) applied to imagery. In this context, we provide a demonstration of the performance improvement possible through the application of the novel projection designs in relation to conventional ones, as well as justification for a fast online projection design method with which state-of-the-art adaptive CS signal recovery is achieved.

Item Open Access: Cross-Domain Multitask Learning with Latent Probit Models
Han, S; Liao, X; Carin, L
Learning multiple tasks across heterogeneous domains is a challenging problem, since the feature space may not be the same for different tasks. We assume the data in multiple tasks are generated from a latent common domain via sparse domain transforms, and propose a latent probit model (LPM) to jointly learn the domain transforms and the shared probit classifier in the common domain. To learn meaningful task relatedness and avoid over-fitting in classification, we introduce sparsity in the domain transform matrices, as well as in the common classifier. We derive theoretical bounds for the estimation error of the classifier in terms of the sparsity of the domain transforms. An expectation-maximization algorithm is derived for learning the LPM. The effectiveness of the approach is demonstrated on several real datasets.

Item Open Access: Dynamic nonparametric Bayesian models for analysis of music (Journal of the American Statistical Association, 2010-06-01)
Ren, L; Dunson, D; Lindroth, S; Carin, L
The dynamic hierarchical Dirichlet process (dHDP) is developed to model complex sequential data, with a focus on audio signals from music. The music is represented in terms of a sequence of discrete observations, and the sequence is modeled using a hidden Markov model (HMM) with time-evolving parameters. The dHDP imposes the belief that observations that are temporally proximate are more likely to be drawn from HMMs with similar parameters, while also allowing for "innovation" associated with abrupt changes in the music texture. The sharing mechanisms of the time-evolving model are derived, and for inference a relatively simple Markov chain Monte Carlo sampler is developed. Segmentation of a given musical piece is constituted via the model inference. Detailed examples are presented on several pieces, with comparisons to other models. The dHDP results are also compared with a conventional music-theoretic analysis. All the supplemental materials used by this paper are available online.

Item Open Access: Generalized Bregman Divergence and Gradient of Mutual Information for Vector Poisson Channels (2013 IEEE International Symposium on Information Theory Proceedings (ISIT), 2013)
Wang, L; Rodrigues, M; Carin, L

Item Open Access: Inferring Latent Structure From Mixed Real and Categorical Relational Data
Salazar, E; Cain, MS; Darling, EF; Mitroff, SR; Carin, L
We consider analysis of relational data (a matrix), in which the rows correspond to subjects (e.g., people) and the columns correspond to attributes. The elements of the matrix may be a mix of real and categorical. Each subject and attribute is characterized by a latent binary feature vector, and an inferred matrix maps each row-column pair of binary feature vectors to an observed matrix element. The latent binary features of the rows are modeled via a multivariate Gaussian distribution with low-rank covariance matrix, and the Gaussian random variables are mapped to latent binary features via a probit link. The same type of construction is applied jointly to the columns. The model infers latent, low-dimensional binary features associated with each row and each column, as well as correlation structure between all rows and between all columns.

Item Open Access: Latent protein trees (Annals of Applied Statistics, 2013-06-01)
Henao, R; Thompson, JW; Moseley, MA; Ginsburg, GS; Carin, L; Lucas, JE
Unbiased, label-free proteomics is becoming a powerful technique for measuring protein expression in almost any biological sample. The output of these measurements after preprocessing is a collection of features and their associated intensities for each sample. Subsets of features within the data are from the same peptide, subsets of peptides are from the same protein, and subsets of proteins are in the same biological pathways; therefore, there is the potential for very complex and informative correlational structure inherent in these data. Recent attempts to utilize these data often focus on the identification of single features that are associated with a particular phenotype that is relevant to the experiment. However, to date, there have been no published approaches that directly model what we know to be multiple different levels of correlation structure. Here we present a hierarchical Bayesian model which is specifically designed to model such correlation structure in unbiased, label-free proteomics. This model utilizes partial identification information from peptide sequencing and database lookup, as well as the observed correlation in the data, to appropriately compress features into latent proteins and to estimate their correlation structure. We demonstrate the effectiveness of the model using artificial/benchmark data and in the context of a series of proteomics measurements of blood plasma from a collection of volunteers who were infected with two different strains of viral influenza.

Item Unknown: Learning a hybrid architecture for sequence regression and annotation (30th AAAI Conference on Artificial Intelligence, AAAI 2016, 2016-01-01)
Zhang, Y; Henao, R; Carin, L; Zhong, J; Hartemink, AJ
When learning a hidden Markov model (HMM), sequential observations can often be complemented by real-valued summary response variables generated from the path of hidden states. Such settings arise in numerous domains, including many applications in biology, like motif discovery and genome annotation. In this paper, we present a flexible framework for jointly modeling both latent sequence features and the functional mapping that relates the summary response variables to the hidden state sequence. The algorithm is compatible with a rich set of mapping functions. Results show that the availability of additional continuous response variables can simultaneously improve the annotation of the sequential observations and yield good prediction performance on both synthetic data and real-world datasets.

Item Unknown: Lognormal and gamma mixed negative binomial regression (Proceedings of the 29th International Conference on Machine Learning, ICML 2012, 2012-10-10)
Zhou, M; Li, L; Dunson, D; Carin, L
In regression analysis of counts, a lack of simple and efficient algorithms for posterior computation has made Bayesian approaches appear unattractive and thus underdeveloped. We propose a lognormal and gamma mixed negative binomial (NB) regression model for counts, and present efficient closed-form Bayesian inference; unlike conventional Poisson models, the proposed approach has two free parameters to include two different kinds of random effects, and allows the incorporation of prior information, such as sparsity in the regression coefficients. By placing a gamma distribution prior on the NB dispersion parameter r, and connecting a lognormal distribution prior with the logit of the NB probability parameter p, efficient Gibbs sampling and variational Bayes inference are both developed. The closed-form updates are obtained by exploiting conditional conjugacy via both a compound Poisson representation and a Pólya-Gamma distribution based data augmentation approach. The proposed Bayesian inference can be implemented routinely, while being easily generalizable to more complex settings involving multivariate dependence structures. The algorithms are illustrated using real examples.

Item Unknown: NASH: Toward End-to-End Neural Architecture for Generative Semantic Hashing (CoRR, 2018)
Shen, D; Su, Q; Chapfuwa, P; Wang, W; Wang, G; Carin, L; Henao, R

Item Unknown: Nested dictionary learning for hierarchical organization of imagery and text (Uncertainty in Artificial Intelligence - Proceedings of the 28th Conference, UAI 2012, 2012-12-01)
Li, L; Zhang, XX; Zhou, M; Carin, L
A tree-based dictionary learning model is developed for joint analysis of imagery and associated text. The dictionary learning may be applied directly to the imagery from patches, or to general feature vectors extracted from patches or superpixels (using any existing method for image feature extraction). Each image is associated with a path through the tree (from root to a leaf), and each of the multiple patches in a given image is associated with one node in that path. Nodes near the tree root are shared between multiple paths, representing image characteristics that are common among different types of images. Moving toward the leaves, nodes become specialized, representing details in image classes. If available, words (text) are also jointly modeled, with a path-dependent probability over words. The tree structure is inferred via a nested Dirichlet process, and a retrospective stick-breaking sampler is used to infer the tree depth and width.

Item Unknown: Non-Gaussian discriminative factor models via the max-margin rank-likelihood (32nd International Conference on Machine Learning, ICML 2015, 2015-01-01)
Yuan, X; Henao, R; Tsalik, EL; Langley, RJ; Carin, L
We consider the problem of discriminative factor analysis for data that are in general non-Gaussian. A Bayesian model based on the ranks of the data is proposed. We first introduce a new max-margin version of the rank-likelihood. A discriminative factor model is then developed, integrating the max-margin rank-likelihood and (linear) Bayesian support vector machines, which are also built on the max-margin principle. The discriminative factor model is further extended to the nonlinear case through mixtures of local linear classifiers, via Dirichlet processes. Fully local conjugacy of the model yields efficient inference with both Markov chain Monte Carlo and variational Bayes approaches. Extensive experiments on benchmark and real data demonstrate superior performance of the proposed model and its potential for applications in computational biology.

Item Unknown: Task-driven adaptive statistical compressive sensing of Gaussian mixture models (IEEE Transactions on Signal Processing, 2013-01-21)
Duarte-Carvajalino, JM; Yu, G; Carin, L; Sapiro, G
A framework for adaptive and non-adaptive statistical compressive sensing is developed, where a statistical model replaces the standard sparsity model of classical compressive sensing. We propose within this framework optimal task-specific sensing protocols specifically and jointly designed for classification and reconstruction. A two-step adaptive sensing paradigm is developed, where online sensing is applied to detect the signal class in the first step, followed by a reconstruction step adapted to the detected class and the observed samples. The approach is based on information theory, here tailored for Gaussian mixture models (GMMs), where an information-theoretic objective relationship between the sensed signals and a representation of the specific task of interest is maximized. Experimental results using synthetic signals, Landsat satellite attributes, and natural images of different sizes and with different noise levels show the improvements achieved using the proposed framework when compared to more standard sensing protocols. The underlying formulation can be applied beyond GMMs, at the price of higher mathematical and computational complexity.
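The two-step paradigm in the last abstract (detect the signal class from compressed measurements, then reconstruct with the detected class's statistics) can be illustrated with a minimal numerical sketch. This is not the authors' implementation: it uses a fixed random projection rather than their information-theoretically designed sensing matrices, and the GMM parameters below are synthetic assumptions chosen only for the demonstration. Under a GMM, both steps are closed-form, since y = Mx + w is Gaussian given the class, and the class-conditional MMSE estimate is the Wiener filter.

```python
import numpy as np

rng = np.random.default_rng(0)

m, ell, K = 20, 8, 3   # signal dim, measurement dim (ell << m), number of classes
noise_var = 1e-3

# Synthetic GMM signal model (illustrative, not from the paper)
mus = [rng.normal(scale=3.0, size=m) for _ in range(K)]
Sigmas = []
for _ in range(K):
    A = rng.normal(size=(m, m))
    Sigmas.append(A @ A.T / m + 0.1 * np.eye(m))

# Fixed random projection (a designed/adaptive M would replace this)
M = rng.normal(size=(ell, m)) / np.sqrt(ell)
Sw = noise_var * np.eye(ell)

def log_gauss(y, mean, cov):
    """Log-density of y under N(mean, cov)."""
    d = y - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + len(y) * np.log(2 * np.pi))

def classify_and_reconstruct(y):
    # Step 1: detect the class from the compressed data;
    # given class k, y ~ N(M mu_k, M Sigma_k M^T + Sw)
    scores = [log_gauss(y, M @ mus[k], M @ Sigmas[k] @ M.T + Sw) for k in range(K)]
    k = int(np.argmax(scores))
    # Step 2: reconstruction adapted to the detected class
    # (class-conditional MMSE / Wiener estimate)
    S = Sigmas[k]
    G = S @ M.T @ np.linalg.inv(M @ S @ M.T + Sw)
    x_hat = mus[k] + G @ (y - M @ mus[k])
    return k, x_hat

# Simulate one sensing pass: draw a signal from class 1 and sense it
k_true = 1
x = rng.multivariate_normal(mus[k_true], Sigmas[k_true])
y = M @ x + rng.multivariate_normal(np.zeros(ell), Sw)
k_hat, x_hat = classify_and_reconstruct(y)
```

With well-separated class means, the class is recovered from only ell = 8 of 20 dimensions, and the class-adapted reconstruction improves on using the class mean alone, which is the qualitative behavior the abstract describes.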