Abstract

Self-supervised learning has gained popularity because it avoids the cost of annotating large-scale datasets. It adopts self-defined pseudolabels as supervision and uses the learned representations for several downstream tasks. Specifically, contrastive learning has recently become a dominant component in self-supervised learning for computer vision, natural language processing (NLP), and other domains. It aims to embed augmented versions of the same sample close to each other while pushing away embeddings from different samples. This paper provides an extensive review of self-supervised methods that follow the contrastive approach. The work explains commonly used pretext tasks in a contrastive learning setup, followed by the different architectures that have been proposed so far. Next, we present a performance comparison of different methods for multiple downstream tasks such as image classification, object detection, and action recognition. Finally, we conclude with the limitations of the current methods and discuss the further techniques and future directions needed to make meaningful progress.
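As a concrete illustration of the objective described above, the following is a minimal sketch of a contrastive loss in the NT-Xent/InfoNCE family (the form popularized by SimCLR). It is not code from the survey itself; the function name, batch size, and temperature value are illustrative assumptions.

```python
# Minimal sketch of a contrastive (NT-Xent / InfoNCE) objective: embeddings
# of two augmented views of the same sample are pulled together, while
# embeddings of all other samples in the batch are pushed apart.
# Names and hyperparameters here are illustrative, not from the survey.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two augmented views of the same N samples."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.T / temperature                         # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    # The positive for view i is the other augmented view of the same sample:
    # rows 0..N-1 pair with rows N..2N-1, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage: random tensors standing in for encoder outputs of two augmented views.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```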

Keywords

Computer science, Artificial intelligence, Pretext, Machine learning, Embedding, Popularity, Supervised learning, Sample (material), Natural language processing, Artificial neural network, Psychology

Related Publications

Universal Sentence Encoder

We present models for encoding sentences into embedding vectors that specifically target transfer learning to other NLP tasks. The models are efficient and result in accurate pe...

2018 · arXiv (Cornell University) · 1289 citations

Publication Info

Year: 2020
Type: article
Citations: 1396
Access: Closed

Citation Metrics

1396 (OpenAlex)

Cite This

Ashish Jaiswal, Ashwin Ramesh Babu, Mohammad Zaki Zadeh et al. (2020). A Survey on Contrastive Self-Supervised Learning. Technologies (MDPI AG). https://doi.org/10.3390/technologies9010002

Identifiers

DOI
10.3390/technologies9010002