Advances in Log-concave Sampling and Domain Adaptation


Date

2023

Authors

Wu, Keru


Abstract

In the vast field of machine learning, the study of distributions and the design of algorithms deriving from them serve as foundational pursuits. This dissertation delves into two critical topics tied to distributions: log-concave sampling and domain adaptation. Our research encompasses both theoretical and methodological developments in these fields, supported by thorough numerical experiments.

Beginning with log-concave sampling, we establish the minimax mixing time of the Metropolis-adjusted Langevin algorithm (MALA) for sampling from a log-smooth and strongly log-concave distribution. Our proof consists of two main components: an upper bound and a lower bound. First, for a $d$-dimensional log-concave density with condition number $\kappa$, we show that MALA with a warm start mixes in $\tilde O(\kappa\sqrt{d})$ iterations. Second, we prove a spectral-gap-based mixing-time lower bound for reversible MCMC algorithms on general state spaces. We apply this result to construct a hard distribution for which MALA requires at least $\tilde\Omega(\kappa\sqrt{d})$ steps to mix. The lower bound matches our upper bound in both condition number and dimension, thereby establishing the minimax mixing time of MALA for log-concave sampling.
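The MALA chain analyzed above is the standard Langevin-proposal-plus-Metropolis-correction scheme. A minimal NumPy sketch follows; the step size and the toy standard-Gaussian target (condition number $\kappa = 1$) are illustrative choices for exposition, not the tuned settings from the theory.

```python
import numpy as np

def mala(log_pi, grad_log_pi, x0, step, n_iters, seed=0):
    """Metropolis-adjusted Langevin algorithm: a Langevin proposal
    followed by a Metropolis-Hastings accept/reject step, which makes
    the chain reversible with respect to the target density pi."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iters, x.size))

    def log_q(y, x):
        # Log density (up to a constant) of the Gaussian proposal
        # N(x + step * grad_log_pi(x), 2 * step * I) evaluated at y.
        diff = y - x - step * grad_log_pi(x)
        return -np.sum(diff ** 2) / (4.0 * step)

    for t in range(n_iters):
        noise = rng.standard_normal(x.size)
        prop = x + step * grad_log_pi(x) + np.sqrt(2.0 * step) * noise
        # Metropolis-Hastings acceptance ratio in log space.
        log_alpha = (log_pi(prop) + log_q(x, prop)) - (log_pi(x) + log_q(prop, x))
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        samples[t] = x
    return samples

# Toy target: standard Gaussian in d = 5 dimensions (kappa = 1).
d = 5
samples = mala(log_pi=lambda x: -0.5 * np.dot(x, x),
               grad_log_pi=lambda x: -x,
               x0=np.zeros(d), step=0.1, n_iters=5000)
```

Without the accept/reject correction this would be the unadjusted Langevin algorithm, whose stationary distribution is only approximately $\pi$; the Metropolis step removes that discretization bias.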

Shifting focus to Domain Adaptation (DA), we tackle the statistical learning problem in which the distribution of the source data used to train a model differs from that of the target data used to evaluate it. Our work is grounded in the assumption that there exist conditionally invariant components (CICs) --- feature representations that are relevant for prediction and remain conditionally invariant across the source and target distributions. We demonstrate three prominent roles of CICs in providing target risk guarantees in DA. First, we propose a new CIC-based algorithm, the importance-weighted conditional invariant penalty (IW-CIP), which enjoys target risk guarantees beyond simple settings such as covariate shift and label shift. Second, we show that CICs help identify large discrepancies between the source and target risks of other DA algorithms. Finally, we demonstrate that incorporating CICs into the domain invariant projection (DIP) algorithm addresses its failure mode caused by label-flipping features.
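To make the conditional-invariance idea concrete, the toy penalty below matches class-conditional feature means across the two domains. This is an illustrative simplification, not the dissertation's exact IW-CIP objective: labeled target data is assumed here purely for exposition, whereas in practice target labels are unobserved and must be handled, e.g., via importance weighting as in IW-CIP.

```python
import numpy as np

def conditional_invariance_penalty(phi_src, y_src, phi_tgt, y_tgt):
    """Toy conditional-invariance penalty: for each label, penalize the
    squared distance between the class-conditional means of the feature
    representation phi on the source and target samples. The penalty is
    zero when phi is conditionally invariant (in mean) given the label."""
    penalty = 0.0
    for label in np.unique(y_src):
        mu_src = phi_src[y_src == label].mean(axis=0)
        mu_tgt = phi_tgt[y_tgt == label].mean(axis=0)
        penalty += np.sum((mu_src - mu_tgt) ** 2)
    return penalty

# Identical class-conditional features across domains -> zero penalty;
# a uniform shift of the target features -> positive penalty.
phi = np.array([[0., 0.], [0., 2.], [4., 4.], [4., 6.]])
y = np.array([0, 0, 1, 1])
zero_pen = conditional_invariance_penalty(phi, y, phi, y)
shift_pen = conditional_invariance_penalty(phi, y, phi + 1.0, y)
```

In a training loop, a penalty of this form would be added to the (importance-weighted) source risk, steering the learned representation toward features whose conditional distribution given the label does not change between domains.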

Through rigorous analysis, we advance insights into log-concave sampling and domain adaptation. Our exploration underscores the importance of probabilistic understanding in designing algorithms tailored to intricate data distributions encountered in machine learning.


Citation

Wu, Keru (2023). Advances in Log-concave Sampling and Domain Adaptation. Dissertation, Duke University. Retrieved from https://hdl.handle.net/10161/30287.


Duke's student scholarship is made available to the public under a Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) license.