Abstract

Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.
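
The abstract does not spell out how the distance between the empirical source and target distributions is measured or minimized. The sketch below shows one standard way to instantiate the idea: learn an orthonormal projection W that minimizes a squared RBF-kernel Maximum Mean Discrepancy (MMD) between the projected source and target samples, via plain gradient descent with a QR re-orthonormalization step. This is a minimal illustration under those assumptions, not the paper's exact objective or optimizer; the function names, kernel bandwidth heuristic, learning rate, and iteration count are all placeholders.

```python
import numpy as np

def rbf_mmd2_and_grad(Xs, Xt, W, sigma=1.0):
    """Squared RBF-kernel MMD between Xs @ W and Xt @ W, plus its gradient w.r.t. W."""
    def block(A, B, sign):
        D = A[:, None, :] - B[None, :, :]                 # pairwise differences, (nA, nB, d)
        DW = D @ W                                        # differences in the latent space, (nA, nB, k)
        K = np.exp(-np.sum(DW ** 2, axis=2) / (2 * sigma ** 2))
        coeff = sign / (A.shape[0] * B.shape[0])
        val = coeff * K.sum()
        # d/dW exp(-||D_ij W||^2 / (2 sigma^2)) = -(K_ij / sigma^2) * D_ij^T (D_ij W)
        grad = -(coeff / sigma ** 2) * np.einsum('ij,ijd,ijk->dk', K, D, DW)
        return val, grad

    v_ss, g_ss = block(Xs, Xs, 1.0)
    v_tt, g_tt = block(Xt, Xt, 1.0)
    v_st, g_st = block(Xs, Xt, -2.0)
    return v_ss + v_tt + v_st, g_ss + g_tt + g_st

def learn_projection(Xs, Xt, k=20, sigma=1.0, lr=0.1, iters=200, seed=0):
    """Projected-gradient sketch: descend the MMD, then re-orthonormalize W."""
    rng = np.random.default_rng(seed)
    d = Xs.shape[1]
    W, _ = np.linalg.qr(rng.standard_normal((d, k)))      # random orthonormal start
    for _ in range(iters):
        _, grad = rbf_mmd2_and_grad(Xs, Xt, W, sigma)
        W, _ = np.linalg.qr(W - lr * grad)                # crude retraction back onto orthonormal matrices
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    Xs = rng.standard_normal((100, 50))                   # synthetic source domain
    Xt = rng.standard_normal((80, 50)) + 0.5              # shifted synthetic target domain
    # Median-heuristic kernel bandwidth (an assumed default, not taken from the paper).
    sigma = np.sqrt(0.5 * np.median(np.sum((Xs[:, None] - Xt[None]) ** 2, axis=2)))
    W = learn_projection(Xs, Xt, k=10, sigma=sigma)
    mmd2, _ = rbf_mmd2_and_grad(Xs, Xt, W, sigma)
    print("final squared MMD after projection:", mmd2)
```

In this sketch, Xs @ W and Xt @ W would then feed a standard classifier trained on the labeled source projections and evaluated on the target projections, mirroring the object recognition experiments summarized above.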

Keywords

Computer science, Invariant, Artificial intelligence, Pattern recognition, Domain adaptation, Visual object recognition, Domain, Projection, Benchmark, Feature extraction, Algorithm, Mathematics, Classifier

Publication Info

Year: 2013
Type: article
Citations: 463
Access: Closed

Citation Metrics

463 citations (source: OpenAlex)

Cite This

Mahsa Baktashmotlagh, Mehrtash Harandi, Brian C. Lovell et al. (2013). Unsupervised Domain Adaptation by Domain Invariant Projection. IEEE International Conference on Computer Vision (ICCV). https://doi.org/10.1109/iccv.2013.100

Identifiers

DOI
10.1109/iccv.2013.100