Abstract

We present a simple self-training method that achieves 88.4% top-1 accuracy on ImageNet, which is 2.0% better than the state-of-the-art model that requires 3.5B weakly labeled Instagram images. On robustness test sets, it improves ImageNet-A top-1 accuracy from 61.0% to 83.7%, reduces ImageNet-C mean corruption error from 45.7 to 28.3, and reduces ImageNet-P mean flip rate from 27.8 to 12.2. To achieve this result, we first train an EfficientNet model on labeled ImageNet images and use it as a teacher to generate pseudo labels on 300M unlabeled images. We then train a larger EfficientNet as a student model on the combination of labeled and pseudo labeled images. We iterate this process by putting back the student as the teacher. During the generation of the pseudo labels, the teacher is not noised so that the pseudo labels are as accurate as possible. However, during the learning of the student, we inject noise such as dropout, stochastic depth and data augmentation via RandAugment to the student so that the student generalizes better than the teacher.
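
The training loop summarized above can be sketched compactly. The following is a minimal, illustrative sketch only: it assumes PyTorch, substitutes tiny fully connected models on synthetic tensors for the EfficientNet teacher/student and the 300M-image unlabeled set, and uses dropout as the sole student-side noise (the paper also applies stochastic depth and RandAugment).

```python
# Minimal sketch of the Noisy Student loop from the abstract, on synthetic data.
# Model sizes, noise settings, and the soft pseudo-label loss are assumptions
# for illustration, not the paper's exact recipe.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model(width, noised):
    # Dropout stands in for the student-side noise; the teacher is noise-free.
    layers = [nn.Flatten(), nn.Linear(32 * 32 * 3, width), nn.ReLU()]
    if noised:
        layers.append(nn.Dropout(p=0.5))
    layers.append(nn.Linear(width, 10))
    return nn.Sequential(*layers)

def train(model, images, targets, epochs=5):
    # targets may be hard labels (int64) or soft pseudo labels (float probs).
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    model.train()
    for _ in range(epochs):
        opt.zero_grad()
        logits = model(images)
        if targets.dtype == torch.long:
            loss = F.cross_entropy(logits, targets)
        else:
            loss = -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
        loss.backward()
        opt.step()
    return model

# Synthetic stand-ins for labeled ImageNet and the unlabeled image pool.
labeled_x = torch.randn(256, 3, 32, 32)
labeled_y = torch.randint(0, 10, (256,))
unlabeled_x = torch.randn(1024, 3, 32, 32)

# 1. Train an un-noised teacher on labeled data only.
teacher = train(make_model(width=128, noised=False), labeled_x, labeled_y)

for iteration in range(3):
    # 2. Generate pseudo labels with the teacher, without noise (eval mode).
    teacher.eval()
    with torch.no_grad():
        pseudo_y = F.softmax(teacher(unlabeled_x), dim=1)  # soft pseudo labels

    # 3. Train an equal-or-larger, noised student on labeled + pseudo-labeled data.
    student = make_model(width=256, noised=True)
    all_x = torch.cat([labeled_x, unlabeled_x])
    all_y = torch.cat([F.one_hot(labeled_y, 10).float(), pseudo_y])
    student = train(student, all_x, all_y)

    # 4. Put the student back as the teacher and iterate.
    teacher = student
```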

Keywords

Robustness (evolution), Computer science, Artificial intelligence, Dropout (neural networks), Training set, Machine learning, Noise (video), Process (computing), Training (meteorology), Contextual image classification, Pattern recognition (psychology), Image (mathematics)

Publication Info

Year: 2020
Type: Article
Pages: 10684-10695
Citations: 2154
Access: Closed

Citation Metrics

OpenAlex: 2154
Influential: 263
CrossRef: 1456

Cite This

Qizhe Xie, Minh-Thang Luong, Eduard Hovy et al. (2020). Self-Training With Noisy Student Improves ImageNet Classification. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 10684-10695. https://doi.org/10.1109/cvpr42600.2020.01070

Identifiers

DOI: 10.1109/cvpr42600.2020.01070
arXiv: 1911.04252
