Robustness Analysis and Improvement in Neural Networks and Neuromorphic Computing

dc.contributor.advisor

Li, Hai

dc.contributor.author

Song, Chang

dc.date.accessioned

2021-05-19T18:08:59Z

dc.date.available

2021-05-19T18:08:59Z

dc.date.issued

2021

dc.department

Electrical and Computer Engineering

dc.description.abstract

Deep learning and neural networks have great potential but remain at risk. So-called adversarial attacks, which apply small perturbations to input samples to fool models, threaten the reliability of neural networks and of their hardware counterpart, neuromorphic computing. To address these issues, various defenses have been attempted, including adversarial training and other data augmentation methods.

In our early attempt to defend against adversarial attacks, we propose a multi-strength adversarial training method that covers a wider effective range than typical single-strength adversarial training. We further propose two different structures to balance the tradeoff between total training time and hardware implementation cost. Experimental results show that our method achieves better accuracy than the baselines at a tolerable additional hardware cost.

To better understand robustness, we analyze the adversarial problem in the decision space. In one of our defense approaches, called feedback learning, we theoretically prove the effectiveness of adversarial training and other data augmentation methods. For empirical proof, we generate non-adversarial examples based on information about the decision boundaries of neural networks and add these examples to training. The results show that the decision boundaries of models trained with feedback learning are more robust to noise and perturbations than those of the baselines.

Beyond algorithm-level concerns, we also focus on hardware implementations in quantization scenarios. We find that adversarially trained neural networks are more vulnerable to quantization loss than plain models. To improve the robustness of hardware-based quantized models, we explore methods such as feedback learning, nonlinear mapping, and layer-wise quantization. Results show that adversarial robustness and quantization robustness can be improved by feedback learning and nonlinear mapping, respectively, but the accuracy gap introduced by quantization can be narrowed further. To minimize both losses simultaneously, we propose a layer-wise adversarial-aware quantization method that chooses the best quantization settings for adversarially trained models. In this method, we use the Lipschitz constants of different layers as error-sensitivity metrics and design several criteria to decide the quantization settings for each layer. The results show that our method further minimizes the accuracy gap between full-precision and quantized adversarially trained models.
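To make the layer-wise adversarial-aware quantization idea concrete, below is a minimal NumPy sketch of one plausible reading of it: estimate each layer's Lipschitz constant (the spectral norm of its weight matrix, via power iteration) as an error-sensitivity metric, then allocate more bits to the more sensitive layers before uniform quantization. The function names (spectral_norm, layerwise_bits, quantize), the rank-based bit-allocation rule, and the 4-to-8-bit budget are illustrative assumptions, not the dissertation's actual criteria.

```python
import numpy as np

def spectral_norm(w, iters=50):
    """Estimate the largest singular value of a weight matrix
    (the Lipschitz constant of a linear layer) via power iteration."""
    v = np.random.randn(w.shape[1])
    for _ in range(iters):
        u = w @ v
        u /= np.linalg.norm(u)
        v = w.T @ u
        v /= np.linalg.norm(v)
    return float(u @ w @ v)

def quantize(w, bits):
    """Uniform symmetric quantization of a weight tensor to `bits` bits."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def layerwise_bits(weights, budget=(4, 8)):
    """Assumed allocation rule: give more bits to layers with larger
    Lipschitz constants, where quantization error is amplified the most."""
    lo, hi = budget
    sens = np.array([spectral_norm(w) for w in weights])
    ranks = sens.argsort().argsort()  # 0 = least sensitive layer
    return [int(round(lo + (hi - lo) * r / max(len(weights) - 1, 1)))
            for r in ranks]

# Toy usage: three random "layers" with increasingly large weights.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 64)) * s for s in (0.5, 1.0, 2.0)]
bits = layerwise_bits(layers)
quantized = [quantize(w, b) for w, b in zip(layers, bits)]
print("per-layer bit widths:", bits)
```

The design choice the sketch illustrates is that a layer's Lipschitz constant bounds how much a perturbation (including quantization error) at its input can grow at its output, so spending the bit budget on high-Lipschitz layers is a natural heuristic for adversarially trained models.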

dc.identifier.uri

https://hdl.handle.net/10161/23108

dc.subject

Computer engineering

dc.subject

Adversarial attacks

dc.subject

Neural networks

dc.subject

Neuromorphic computing

dc.subject

Quantization

dc.subject

Robustness

dc.subject

Security

dc.title

Robustness Analysis and Improvement in Neural Networks and Neuromorphic Computing

dc.type

Dissertation

Files

Original bundle

Name: Song_duke_0066D_16246.pdf
Size: 2.33 MB
Format: Adobe Portable Document Format
