Efficient and Generalizable Neural Architecture Search for Visual Recognition
Neural Architecture Search (NAS) can achieve accuracy superior to that of human-designed neural networks by automating the design and search process. While automatically designed architectures can reach new state-of-the-art performance with less manual crafting effort, three obstacles hinder the development of next-generation NAS algorithms: (1) the search space is constrained, which limits representation ability; (2) searching a large space is time-consuming, which slows down the model crafting process; (3) inference of complicated neural architectures is slow, which limits deployability across devices.

Regarding the search space, previous NAS works rely on existing block motifs: their search spaces seek the best combination of MobileNetV2 blocks without exploring more sophisticated cell connections. To accelerate the search process, a more accurate description of neural architectures is necessary. To deploy neural architectures on hardware, better adaptability is required. This dissertation proposes ScaleNAS, which expands the search space so that it is adaptable to multiple vision tasks. The dissertation then shows how NASGEM overcomes the limited representation ability of prior architecture encodings to accelerate searching. Finally, it shows how to integrate neural architecture search with structural pruning and mixed-precision quantization to further improve hardware deployment.
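To make the search-space constraint concrete, the following is a minimal illustrative sketch (not from the dissertation) of a block-based search space in the style of MobileNetV2-derived NAS, where each layer chooses only a kernel size and an expansion ratio. The choice sets, layer count, and all names here are assumptions for illustration.

```python
import itertools

# Hypothetical per-layer choices, typical of MobileNetV2-style spaces
# (assumed values, not the dissertation's actual search space).
KERNEL_SIZES = [3, 5, 7]      # depthwise-conv kernel options
EXPANSION_RATIOS = [3, 6]     # inverted-bottleneck expansion options
NUM_LAYERS = 4                # kept small so the space is enumerable here

def enumerate_block_space():
    """Yield every architecture as a tuple of per-layer (kernel, expansion) choices."""
    layer_choices = list(itertools.product(KERNEL_SIZES, EXPANSION_RATIOS))
    yield from itertools.product(layer_choices, repeat=NUM_LAYERS)

if __name__ == "__main__":
    archs = list(enumerate_block_space())
    # Even this toy space has (3 * 2) ** 4 = 1296 candidates; a realistic
    # ~20-layer space exceeds 10^15, which is why efficient search matters.
    print(f"{len(archs)} candidate architectures")
    print("example:", archs[0])
```

Note that every candidate in such a space is a linear stack of the same block type; the wiring between operations is fixed, which is exactly the representation limit the dissertation targets by exploring richer cell connections.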
