Security and Robustness in Neuromorphic Computing and Deep Learning

Date

2020

Repository Usage Stats

266 views, 423 downloads

Abstract

Machine learning (ML) has advanced rapidly over the past decade. Among the many ML algorithms, neural networks (NNs) and neuromorphic computing systems (NCSs), both inspired by biological neural systems, achieve state-of-the-art performance. With the growth of computing resources and big data, deep neural networks (DNNs), also known as deep learning (DL), are applied in areas such as image recognition and detection, feature extraction, and natural language processing. However, these applications introduce novel security threats: attackers attempt to steal, tamper with, and destroy the models, incurring substantial losses. We do not yet fully understand these threats, because NNs are black boxes and remain under active development, and their complexity exposes more vulnerabilities than traditional ML algorithms. To address these threats, this dissertation focuses on identifying novel security threats against NNs and revisiting traditional issues from the NN perspective. We also identify the key mechanisms behind these attacks, explore their variations, and develop robust defenses against them.

One of our works aims to prevent attackers with physical access from learning the proprietary algorithm implemented by neuromorphic hardware, i.e., a replication attack. For this purpose, we leverage the obsolescence effect in memristors to judiciously reduce the output accuracy for any unauthorized user. Our methodology is verified to be compatible with mainstream classification applications, memristor devices, and security and performance constraints.

In many applications, public data may be poisoned when it is collected as input for re-training DNNs. Although poisoning attacks against support vector machines (SVMs) have been extensively studied, we still have very limited understanding of how such an attack can be mounted against neural networks. We therefore examine the possibility of directly applying a gradient-based method to generate poisoned samples against neural networks, and then propose a generative method that accelerates the generation of poisoned samples while maintaining high attack efficiency. Experimental results show that the generative method significantly accelerates the generation of poisoned samples compared with the numerical gradient method, with only marginal degradation in model accuracy.

Deepfake represents a category of face-swapping attacks that leverage machine learning models such as autoencoders or generative adversarial networks (GANs). Various detection techniques for Deepfake attacks have been explored, but these are passive measures: they mitigate the damage only after high-quality fake content has been generated. More importantly, we would like to stay ahead of attackers with robust defenses. This work therefore takes an offensive measure that impedes the generation of high-quality fake images or videos: we propose novel transformation-aware adversarially perturbed faces as a defense against GAN-based Deepfake attacks (a minimal code sketch of this idea follows the abstract).

Finally, we explore data preprocessing and augmentation techniques to enhance model robustness. Specifically, we leverage convolutional neural networks (CNNs) to automate the wafer inspection process and propose several techniques to preprocess and augment wafer images, enhancing our model's generalization to unseen wafers (e.g., from other fabs).
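As a rough illustration of the transformation-aware perturbation idea mentioned above, the following sketch crafts a PGD-style perturbation that degrades the output of a face-reconstruction model even when the input is randomly resized. The autoencoder stand-in, the resize-only transform set, and all hyperparameters are illustrative assumptions, not the dissertation's exact formulation.

```python
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF

def protect_face(face, autoencoder, epsilon=8 / 255, alpha=2 / 255, steps=40):
    """Return an adversarially perturbed copy of `face` (a [1, 3, H, W] tensor in [0, 1])."""
    autoencoder.eval()
    for p in autoencoder.parameters():
        p.requires_grad_(False)                        # gradients are needed only w.r.t. the input
    clean_recon = autoencoder(face).detach()           # reference output on the clean face
    delta = torch.zeros_like(face, requires_grad=True)
    h, w = face.shape[-2:]

    for _ in range(steps):
        # Transformation-aware step: evaluate the loss under a random resize, so the
        # perturbation survives preprocessing that a Deepfake pipeline is likely to apply.
        scale = float(torch.empty(1).uniform_(0.8, 1.2))
        x = TF.resize(face + delta, [int(h * scale), int(w * scale)], antialias=False)
        x = TF.resize(x, [h, w], antialias=False)

        # Push the reconstruction of the perturbed face away from the clean
        # reconstruction, i.e., force the generator to produce distorted output.
        loss = nn.functional.mse_loss(autoencoder(x), clean_recon)
        loss.backward()

        with torch.no_grad():
            delta += alpha * delta.grad.sign()          # gradient-ascent (PGD) step
            delta.clamp_(-epsilon, epsilon)             # keep the perturbation imperceptible
            delta.clamp_(-face, 1 - face)               # keep face + delta inside [0, 1]
        delta.grad.zero_()

    return (face + delta).detach()
```

The random resize stands in for the transformation-awareness described in the abstract; a fuller treatment would sample from a richer set of transformations (e.g., compression, cropping, color jitter) in the spirit of expectation over transformation.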

Department

Electrical and Computer Engineering

Subjects

Computer engineering, Attack, Deep learning, Defense, Machine learning, Neuromorphic computing, Security

Citation

Yang, Chaofei (2020). Security and Robustness in Neuromorphic Computing and Deep Learning. Dissertation, Duke University. Retrieved from https://hdl.handle.net/10161/21528.

Collections


Duke's student scholarship is made available to the public using a Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) license.