Abstract

Large-scale pre-training methods that learn cross-modal representations from image-text pairs are becoming popular for vision-language tasks. Existing methods simply concatenate image region features and text features as the input to the model to be pre-trained, and rely on self-attention to learn image-text semantic alignments in a brute-force manner. In this paper, we propose a new learning method, Oscar (Object-Semantics Aligned Pre-training), which uses object tags detected in images as anchor points to significantly ease the learning of alignments. Our method is motivated by the observation that the salient objects in an image can be accurately detected and are often mentioned in the paired text. We pre-train an Oscar model on a public corpus of 6.5 million text-image pairs and fine-tune it on downstream tasks, setting new state-of-the-art results on six well-established vision-language understanding and generation tasks.
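
The abstract's core idea, concatenating caption words, detected object tags, and image region features into a single sequence so that the tags act as anchor points shared by both modalities, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch reconstruction, not the authors' released code; the class name OscarStyleInput and the dimensions (hidden size 768, region feature size 2054) are illustrative assumptions.

    # Minimal sketch of an Oscar-style word-tag-region input (illustrative, not the official implementation).
    import torch
    import torch.nn as nn

    class OscarStyleInput(nn.Module):
        def __init__(self, vocab_size=30522, hidden=768, region_dim=2054):
            super().__init__()
            self.word_emb = nn.Embedding(vocab_size, hidden)  # shared by caption words and object tags
            self.region_proj = nn.Linear(region_dim, hidden)  # project detector features to the hidden size
            self.type_emb = nn.Embedding(2, hidden)           # 0 = text (words + tags), 1 = image regions

        def forward(self, caption_ids, tag_ids, region_feats):
            # Object tags are embedded with the same word embedding as the caption,
            # so a shared token (e.g. "dog") anchors the two modalities to each other.
            text = self.word_emb(torch.cat([caption_ids, tag_ids], dim=1))
            regions = self.region_proj(region_feats)
            seq = torch.cat([text, regions], dim=1)
            seg = torch.cat([torch.zeros(text.shape[:2], dtype=torch.long),
                             torch.ones(regions.shape[:2], dtype=torch.long)], dim=1)
            # The summed sequence would then be fed to a multi-layer transformer encoder for pre-training.
            return seq + self.type_emb(seg)

    # Example: a 12-token caption, 5 detected object tags, and 5 region feature vectors.
    builder = OscarStyleInput()
    out = builder(torch.randint(0, 30522, (1, 12)),
                  torch.randint(0, 30522, (1, 5)),
                  torch.randn(1, 5, 2054))
    print(out.shape)  # torch.Size([1, 22, 768])

The design choice implied by the abstract is that words and tags share one embedding table, which is what lets the detected tags ease the alignment between text tokens and image regions.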

Keywords

Computer science, Semantics (computer science), Artificial intelligence, Image (mathematics), Natural language processing, Object (grammar), Salient, Code (set theory), Computer vision, Programming language

Publication Info

Year: 2020
Type: book-chapter
Pages: 121-137
Citations: 1430
Access: Closed

Citation Metrics

OpenAlex: 1430
Influential: 250

Cite This

Xiujun Li, Xi Yin, Chunyuan Li et al. (2020). Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks. Lecture Notes in Computer Science, 121-137. https://doi.org/10.1007/978-3-030-58577-8_8

Identifiers

DOI: 10.1007/978-3-030-58577-8_8
PMID: 41328267
PMCID: PMC12664865
arXiv: 2004.06165

Data Quality

Data completeness: 79%