Efficient and Generalizable Neural Architecture Search for Visual Recognition
| dc.contributor.advisor | Chen, Yiran | |
| dc.contributor.author | Cheng, Hsin-Pai | |
| dc.date.accessioned | 2021-09-14T15:09:17Z | |
| dc.date.available | 2021-09-14T15:09:17Z | |
| dc.date.issued | 2021 | |
| dc.department | Electrical and Computer Engineering | |
| dc.description.abstract | Neural Architecture Search (NAS) can achieve accuracy superior to that of human-designed neural networks thanks to automated design processes and search techniques. While automatically designed neural architectures can reach new state-of-the-art performance with less manual crafting effort, three obstacles hinder the development of next-generation NAS algorithms: (1) search spaces are constrained, which limits their representation ability; (2) searching a large search space is time-consuming, which slows down the model crafting process; and (3) inference with complicated neural architectures is slow, which limits deployability across devices. To improve the search space, previous NAS works rely on existing block motifs; specifically, prior search spaces seek the best combination of MobileNetV2 blocks without exploring sophisticated cell connections. To accelerate the search process, a more accurate description of neural architectures is necessary. To deploy neural architectures on hardware, better adaptability is required. This dissertation proposes ScaleNAS, which expands the search space so that it is adaptable to multiple vision-based tasks. The dissertation then shows that NASGEM improves neural architecture representation ability to accelerate searching. Finally, it shows how to integrate neural architecture search with structural pruning and mixed-precision quantization to further improve hardware deployment. | |
| dc.identifier.uri | | |
| dc.subject | Computer engineering | |
| dc.title | Efficient and Generalizable Neural Architecture Search for Visual Recognition | |
| dc.type | Dissertation | |