Abstract

In this paper, we investigate how to establish relationships between semantic concepts from large-scale real-world click data collected by a commercial image search engine. This is a challenging task because click data is noisy: it contains typos, different queries that refer to the same concept, and other artifacts. We first define five specific relationships between concepts. We then extract concept relationship features in both the textual and visual domains to train concept relationship models, so that each pair of concepts can be classified into one of the five defined relationships. We study the efficacy of these concept relationships by applying them to augment imperfect image tags, i.e., to improve their representative power. We further employ a sophisticated hashing approach to transform the augmented image tags into binary codes, which are subsequently used for content-based image retrieval. Experimental results on the NUS-WIDE dataset demonstrate the superiority of our proposed approach over state-of-the-art methods.
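
The retrieval step described above maps augmented tag vectors to binary codes and compares them by Hamming distance. As a rough, hypothetical sketch only (this is not the hashing approach used in the paper; all names, dimensions, and the random-hyperplane scheme are assumptions for illustration), the idea can be shown with LSH-style hashing:

    import numpy as np

    # Hypothetical sketch: hash (augmented) tag weight vectors into binary codes
    # using random hyperplane projections, then rank database images by Hamming
    # distance to the query code. Dimensions and data are illustrative only.
    rng = np.random.default_rng(0)
    vocab_size = 1000   # assumed size of the tag vocabulary
    code_bits = 64      # assumed length of the binary code

    # Random projection matrix defining the hash functions.
    projections = rng.standard_normal((vocab_size, code_bits))

    def hash_tags(tag_vector: np.ndarray) -> np.ndarray:
        """Map a tag occurrence/weight vector to a binary code."""
        return (tag_vector @ projections > 0).astype(np.uint8)

    def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
        return int(np.count_nonzero(a != b))

    # Toy usage: a small database of images represented by tag vectors.
    database = rng.random((5, vocab_size))
    db_codes = np.array([hash_tags(v) for v in database])
    query_code = hash_tags(rng.random(vocab_size))
    ranking = sorted(range(len(db_codes)),
                     key=lambda i: hamming_distance(query_code, db_codes[i]))
    print("Images ranked by Hamming distance to the query:", ranking)

Binary codes of this kind allow nearest-neighbor search with cheap bitwise operations, which is what makes the hashing stage attractive for large-scale retrieval.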

Keywords

Computer science; Hash function; Information retrieval; Image retrieval; Image (mathematics); Semantics (computer science); Domain (mathematical analysis); Closed captioning; Artificial intelligence; Visualization; Natural language processing


Publication Info

Year: 2015
Type: Article
Volume: 1
Issue: 4
Pages: 152-161
Citations: 99
Access: Closed


Cite This

Richang Hong, Yang Yang, Meng Wang et al. (2015). Learning Visual Semantic Relationships for Efficient Visual Retrieval. IEEE Transactions on Big Data, 1(4), 152-161. https://doi.org/10.1109/tbdata.2016.2515640

Identifiers

DOI: 10.1109/tbdata.2016.2515640