Leveraging Data Augmentation in Limited-Label Scenarios for Improved Generalization
Abstract
The resurgence of Convolutional Neural Networks (CNNs) from the early foundational work is largely attributed to the advent of extensive manually labeled datasets, which have made it possible to train high-capacity models with strong generalization capabilities. However, the annotation cost for these datasets is often prohibitive, so training CNNs on limited data in a fully supervised setting remains a crucial problem. Data augmentation is a promising direction for improving generalization in scarce-data settings.
We study foundational augmentation techniques, including Mixed Sample Data Augmentations (MSDAs) and a no-parameter variant of RandAugment termed Preset-RandAugment, in the fully supervised scenario. We observe that Preset-RandAugment excels in limited-data contexts, while MSDAs are only moderately effective. To explain this behaviour, we refine ideas about diversity and realism from prior work and propose new ways to measure them. We postulate an additional property that matters when data is limited: augmentations should encourage faster convergence by helping the model learn stable, invariant low-level features, focusing on less class-specific patterns. We explain the effectiveness of Preset-RandAugment in terms of these properties and identify low-level feature transforms as a key contributor to performance.
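To make the idea concrete, here is a minimal Python sketch of a RandAugment-style policy with preset, untuned magnitudes. The particular transform pool, operation count, and magnitudes are illustrative assumptions, not the exact configuration studied in the dissertation.

```python
import random
from PIL import Image, ImageEnhance, ImageOps

# Hypothetical pool of low-level feature transforms with preset magnitudes.
# The specific ops and fixed magnitudes below are illustrative assumptions.
PRESET_OPS = [
    lambda img: ImageOps.autocontrast(img),
    lambda img: ImageOps.equalize(img),
    lambda img: img.rotate(15),
    lambda img: ImageEnhance.Sharpness(img).enhance(1.5),
    lambda img: ImageEnhance.Color(img).enhance(1.5),
]

def preset_randaugment(img: Image.Image, num_ops: int = 2) -> Image.Image:
    """Apply `num_ops` randomly chosen transforms with fixed (preset)
    magnitudes, so no magnitude hyperparameter needs tuning."""
    for op in random.sample(PRESET_OPS, num_ops):
        img = op(img)
    return img
```

The "no-parameter" aspect is captured by the fixed magnitudes: randomness comes only from which transforms are drawn, so nothing needs to be tuned on held-out data.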
Building on these insights, we introduce a novel augmentation technique called RandMSAugment that integrates complementary strengths of existing methods. It combines low-level feature transforms from Preset-RandAugment with interpolation and cut-and-paste from MSDA, and improves image diversity through added stochasticity in the mixing process. RandMSAugment significantly outperforms the competition on CIFAR-100, STL-10, and Tiny-ImageNet. With very small training sets (4, 25, and 100 samples per class), RandMSAugment achieves compelling performance gains of between 4.1% and 6.75%. Even with more training data (500 samples per class), we improve performance by 1.03% to 2.47%. We also incorporate RandMSAugment into a semi-supervised learning (SSL) framework and show promising improvements over the state-of-the-art SSL method FlexMatch; the improvements are larger when the number of labeled samples is smaller. RandMSAugment requires no hyperparameter tuning, extra validation data, or cumbersome optimizations.
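The following is a minimal sketch of the mixing step described above, assuming a low-level transform (such as the Preset-RandAugment sketch earlier) has already been applied to each input. The Beta(alpha, alpha) sampling, uniform choice between branches, and tensor shapes are assumptions for illustration, not the dissertation's exact algorithm.

```python
import numpy as np
import torch  # inputs below are assumed to be torch.Tensor

def rand_ms_augment(x1, y1, x2, y2, alpha: float = 1.0):
    """Sketch of the RandMSAugment idea: stochastically combine two
    (already low-level-transformed) images via Mixup-style interpolation
    or CutMix-style cut-and-paste.

    x1, x2: image tensors of shape (C, H, W); y1, y2: one-hot label tensors.
    """
    lam = float(np.random.beta(alpha, alpha))  # stochastic mixing coefficient
    if np.random.random() < 0.5:
        # Interpolate pixels (Mixup-style).
        x = lam * x1 + (1.0 - lam) * x2
    else:
        # Paste a random rectangle from x2 into x1 (CutMix-style).
        _, h, w = x1.shape
        rh, rw = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
        top = np.random.randint(0, h - rh + 1)
        left = np.random.randint(0, w - rw + 1)
        x = x1.clone()
        x[:, top:top + rh, left:left + rw] = x2[:, top:top + rh, left:left + rw]
        lam = 1.0 - (rh * rw) / (h * w)  # label weight from pasted area
    y = lam * y1 + (1.0 - lam) * y2  # mix labels to match the pixels
    return x, y
```

Sampling both the mixing mode and the coefficient independently per example is one way to realize the added stochasticity, and hence the image diversity, referred to above.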
Finally, we combine RandMSAugment with another powerful generalization tool, ensembling, for fully supervised training with limited samples. We show additional improvements on the three classification benchmarks, ranging between 2% and 5%. We empirically demonstrate that the gains due to ensembling are larger when the individual networks have moderate accuracies, i.e., outside of the low and high extremes. Furthermore, we introduce a simulation tool that provides insights into the maximum accuracy achievable through ensembling under various conditions.
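As an illustration of what such a simulation can show (this is not the dissertation's tool, just a minimal Monte-Carlo sketch under an idealized independent-errors assumption), one can estimate majority-vote accuracy as a function of each ensemble member's accuracy:

```python
import numpy as np

def simulate_ensemble_accuracy(p: float, k: int, n_classes: int = 100,
                               trials: int = 20_000, seed: int = 0) -> float:
    """Monte-Carlo estimate of majority-vote accuracy for `k` independent
    classifiers, each correct with probability `p` and otherwise picking a
    wrong class uniformly at random. Independence is an idealization; real
    ensemble members make correlated errors."""
    rng = np.random.default_rng(seed)
    correct = 0
    for _ in range(trials):
        # Class 0 encodes the true class; wrong votes fall on classes 1..n-1.
        votes = np.where(rng.random(k) < p,
                         0,
                         rng.integers(1, n_classes, size=k))
        counts = np.bincount(votes, minlength=n_classes)
        # Break ties uniformly at random among the top-voted classes.
        top = np.flatnonzero(counts == counts.max())
        correct += rng.choice(top) == 0
    return correct / trials

# Example: the gain over a single model peaks at moderate accuracies.
for p in (0.3, 0.5, 0.7, 0.9):
    print(p, simulate_ensemble_accuracy(p, k=5))
```

Running this shows the ensemble's gain over a single model is largest at moderate per-model accuracy, consistent with the empirical observation above; correlated errors in real ensembles shrink these gains.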
Citation
Ravindran, Swarna Kamlam (2024). Leveraging Data Augmentation in Limited-Label Scenarios for Improved Generalization. Dissertation, Duke University. Retrieved from https://hdl.handle.net/10161/30343.