Abstract

In this paper, we question whether self-supervised learning provides new properties to Vision Transformers (ViT) that stand out compared to convolutional networks (convnets). Beyond the fact that adapting self-supervised methods to this architecture works particularly well, we make the following observations: first, self-supervised ViT features contain explicit information about the semantic segmentation of an image, which does not emerge as clearly with supervised ViTs or with convnets. Second, these features are also excellent k-NN classifiers, reaching 78.3% top-1 on ImageNet with a small ViT. Our study also underlines the importance of the momentum encoder, multi-crop training, and the use of small patches with ViTs. We implement our findings into a simple self-supervised method, called DINO, which we interpret as a form of self-distillation with no labels. We show the synergy between DINO and ViTs by achieving 80.1% top-1 on ImageNet in linear evaluation with ViT-Base.
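
The recipe the abstract describes can be made concrete in a few lines. The following is a minimal sketch of self-distillation with no labels as summarized above, written in PyTorch-style Python; it is not the authors' released code, and the network, dimensions, temperatures, and momentum values are illustrative stand-ins. A student and a momentum (EMA) teacher see two augmented views of each image, the teacher's centered and sharpened output is the cross-entropy target for the student, and the teacher's weights track an exponential moving average of the student's.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Stand-in backbones: the paper uses ViTs with a projection head;
    # a single linear layer suffices to illustrate the update rule.
    student = nn.Linear(384, 256)   # hypothetical feature dim -> output dim
    teacher = nn.Linear(384, 256)
    teacher.load_state_dict(student.state_dict())
    for p in teacher.parameters():
        p.requires_grad_(False)     # the teacher receives no gradients

    center = torch.zeros(256)       # running center of teacher outputs
    opt = torch.optim.SGD(student.parameters(), lr=0.1)

    def H(s_out, t_out, tau_s=0.1, tau_t=0.04):
        # Cross-entropy between the centered, sharpened (low-temperature)
        # teacher distribution and the student's log-probabilities.
        t = F.softmax((t_out - center) / tau_t, dim=-1)
        return -(t * F.log_softmax(s_out / tau_s, dim=-1)).sum(-1).mean()

    for step in range(10):          # stand-in for a real augmented data loader
        x1 = torch.randn(32, 384)   # two augmented views of the same batch
        x2 = torch.randn(32, 384)

        t1, t2 = teacher(x1), teacher(x2)
        # Symmetrized loss: each teacher view is the target for the
        # student's prediction on the other view.
        loss = 0.5 * (H(student(x1), t2) + H(student(x2), t1))

        opt.zero_grad()
        loss.backward()
        opt.step()

        with torch.no_grad():
            # Momentum encoder: teacher weights are an EMA of the student's.
            for ps, pt in zip(student.parameters(), teacher.parameters()):
                pt.mul_(0.996).add_(ps, alpha=0.004)
            # Update the center to keep the teacher output from collapsing
            # to a single dimension.
            center = 0.9 * center + 0.1 * torch.cat([t1, t2]).mean(dim=0)

In the full method, several additional small local crops are passed through the student only (the multi-crop training the abstract highlights), and the temperature, momentum, and centering coefficients follow schedules; those details are omitted from this sketch.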

Keywords

Computer science · Artificial intelligence · Machine learning · Transformer · Segmentation · Supervised learning · Encoder · Pattern recognition (psychology) · Artificial neural network · Engineering

Related Publications

A ConvNet for the 2020s

The "Roaring 20s" of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification...

2022 IEEE/CVF Conference on Computer ... · 5683 citations

Publication Info

Year: 2021
Type: article
Pages: 9630-9640
Citations: 4220
Access: Closed

Citation Metrics

4220 citations (OpenAlex)

Cite This

Mathilde Caron, Hugo Touvron, Ishan Misra et al. (2021). Emerging Properties in Self-Supervised Vision Transformers. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 9630-9640. https://doi.org/10.1109/iccv48922.2021.00951

Identifiers

DOI: 10.1109/iccv48922.2021.00951