Privacy and Robustness of Deep Neural Networks

Date

2021

Authors

Li, Bai

Repository Usage Stats

292 views
395 downloads

Abstract

Concerns related to security and confidentiality have been raised when applying machine learning to real-world applications. In this dissertation, I mainly discuss approaches to defending models against membership inference attacks and adversarial attacks. Membership inference attacks attempt to infer training set information from observing the model output, while adversarial attacks aim to alter the output of models by introducing minimal perturbations in the input.

This dissertation consists of three main parts. In the first part, I show that stochastic gradient Markov chain Monte Carlo (SG-MCMC), a class of scalable Bayesian posterior sampling algorithms, satisfies strong differential privacy when carefully chosen stepsizes are employed. I develop theory on the performance of the proposed differentially private SG-MCMC method and conduct experiments to support the analysis, showing that a standard SG-MCMC sampler with minor modification can reach state-of-the-art performance in terms of both privacy and utility on Bayesian learning.

In the second part, I introduce a framework that is scalable and provides certified bounds on the norm of the input manipulation for constructing adversarial examples. A connection between robustness against adversarial perturbation and additive random noise is established, and a training strategy that can significantly improve the certified bounds is proposed.

In the third part, I conduct experiments to understand the behavior of fast adversarial training. Fast adversarial training is a promising approach that remarkably reduces the computation time of adversarially robust training, yet it can only run for a limited number of training epochs, resulting in sub-optimal performance. I show that the key to its success is the ability to recover from overfitting to weak attacks. I then extend these findings to improve fast adversarial training, demonstrating superior robust accuracy compared to strong adversarial training, with much-reduced training time.
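
The first part rests on the observation that the Gaussian noise SG-MCMC already injects for posterior sampling can double as a privacy mechanism. The following is a minimal sketch, assuming stochastic gradient Langevin dynamics (SGLD, a standard SG-MCMC sampler) with gradient clipping to bound per-step sensitivity; the stepsize and clipping threshold are illustrative placeholders, not the values derived in the dissertation.

```python
import numpy as np

def sgld_step(theta, grad_log_post, eta, clip=1.0, rng=np.random):
    """One SGLD update. With the gradient clipped to bound sensitivity
    and the stepsize eta kept small, the injected N(0, eta) noise also
    provides differential privacy (illustrative hyperparameters)."""
    g = grad_log_post(theta)            # minibatch gradient of the log-posterior
    norm = np.linalg.norm(g)
    if norm > clip:                     # clip to bound per-step sensitivity
        g = g * (clip / norm)
    noise = rng.normal(0.0, np.sqrt(eta), size=theta.shape)
    return theta + 0.5 * eta * g + noise   # ascend the log-posterior, plus noise
```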
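
The second part's connection between adversarial robustness and additive random noise is in the spirit of randomized smoothing. Below is a hedged sketch of the prediction side only: classify many Gaussian-perturbed copies of the input and take a majority vote, whose margin can be converted into a certified L2 radius in smoothing-style analyses. The function name, sigma, and sample count are illustrative, not the dissertation's exact certification procedure.

```python
import numpy as np

def smoothed_predict(classify, x, sigma=0.25, n=1000, rng=np.random):
    """Majority vote over Gaussian-perturbed copies of x.
    `classify` maps an input array to a class label; the vote margin
    underlies the certified radius in smoothing-style analyses."""
    counts = {}
    for _ in range(n):
        label = classify(x + rng.normal(0.0, sigma, size=x.shape))
        counts[label] = counts.get(label, 0) + 1
    return max(counts, key=counts.get)
```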
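
For the third part, "fast adversarial training" refers to the family of single-step methods that replace a multi-step attack with one FGSM step from a random start. A minimal PyTorch sketch of one such training step follows, with hyperparameters set to common CIFAR-10 values for illustration; the dissertation's improvements (detecting and recovering from overfitting to the weak single-step attack) would wrap around a loop of steps like this one.

```python
import torch
import torch.nn.functional as F

def fgsm_train_step(model, x, y, optimizer, eps=8/255, alpha=10/255):
    """One fast-adversarial-training step: a single FGSM attack from a
    random start inside the eps-ball, then a standard update on the
    perturbed batch (illustrative hyperparameters)."""
    # Single-step attack from a random initialization.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    attack_loss = F.cross_entropy(model(x + delta), y)
    grad = torch.autograd.grad(attack_loss, delta)[0]
    delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()

    # Standard training update on the adversarial batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model((x + delta).clamp(0.0, 1.0)), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```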

Citation

Li, Bai (2021). Privacy and Robustness of Deep Neural Networks. Dissertation, Duke University. Retrieved from https://hdl.handle.net/10161/23002.

Except where otherwise noted, student scholarship that was shared on DukeSpace after 2009 is made available to the public under a Creative Commons Attribution / Non-commercial / No derivatives (CC-BY-NC-ND) license. All rights in student work shared on DukeSpace before 2009 remain with the author and/or their designee, whose permission may be required for reuse.