Abstract

In this work, we present a method for unsupervised domain adaptation. Many adversarial learning methods train a domain classifier network to distinguish features as coming from either the source or the target domain, and train a feature generator network to fool that classifier. Two problems exist with these methods. First, the domain classifier only tries to distinguish features as source or target and thus does not consider task-specific decision boundaries between classes; a trained generator can therefore produce ambiguous features near class boundaries. Second, these methods aim to completely match the feature distributions of the two domains, which is difficult because each domain has its own characteristics. To solve these problems, we introduce a new approach that aligns the source and target distributions by utilizing task-specific decision boundaries. We propose to maximize the discrepancy between two classifiers' outputs to detect target samples that lie far from the support of the source, while a feature generator learns to generate target features near that support to minimize the discrepancy. Our method outperforms existing methods on several image classification and semantic segmentation datasets. The code is available at https://github.com/mil-tokyo/MCD_DA.
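
The adversarial loop the abstract describes alternates between two classifiers that maximize their output discrepancy on target samples and a generator that minimizes it. Below is a minimal PyTorch sketch of that three-step procedure, assuming toy linear networks, plain SGD, an L1 discrepancy between softmax outputs, and random tensors standing in for real source/target batches; architectures and hyperparameters here are illustrative assumptions, not the authors' configuration (see the linked repository for the official implementation).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def discrepancy(p1, p2):
    # L1 distance between the two classifiers' softmax outputs,
    # used as the discrepancy measure between classifiers.
    return (p1.softmax(dim=1) - p2.softmax(dim=1)).abs().mean()

# Toy networks (assumption); the paper uses task-specific CNNs.
G  = nn.Sequential(nn.Linear(64, 32), nn.ReLU())   # feature generator
F1 = nn.Linear(32, 10)                             # classifier 1
F2 = nn.Linear(32, 10)                             # classifier 2

opt_g = torch.optim.SGD(G.parameters(), lr=1e-2)
opt_f = torch.optim.SGD(list(F1.parameters()) + list(F2.parameters()), lr=1e-2)

# Random batches standing in for real source/target data (assumption).
xs, ys = torch.randn(8, 64), torch.randint(0, 10, (8,))
xt = torch.randn(8, 64)

for step in range(10):
    # Step A: train G, F1, F2 to classify source samples correctly.
    opt_g.zero_grad(); opt_f.zero_grad()
    feat = G(xs)
    loss_a = F.cross_entropy(F1(feat), ys) + F.cross_entropy(F2(feat), ys)
    loss_a.backward()
    opt_g.step(); opt_f.step()

    # Step B: fix G (features detached); train F1, F2 to maximize the
    # discrepancy on target while staying accurate on source, which
    # exposes target samples far from the source support.
    opt_f.zero_grad()
    feat_s = G(xs).detach()
    feat_t = G(xt).detach()
    loss_b = (F.cross_entropy(F1(feat_s), ys)
              + F.cross_entropy(F2(feat_s), ys)
              - discrepancy(F1(feat_t), F2(feat_t)))
    loss_b.backward()
    opt_f.step()

    # Step C: fix F1, F2 (only opt_g is stepped); train G to minimize
    # the discrepancy on target, pulling target features toward the
    # source support. The inner repeat count is a hyperparameter.
    for _ in range(4):
        opt_g.zero_grad()
        feat_t = G(xt)
        discrepancy(F1(feat_t), F2(feat_t)).backward()
        opt_g.step()
```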

Keywords

Discriminator, Computer science, Classifier (UML), Artificial intelligence, Pattern recognition (psychology), Decision boundary, Domain adaptation, Generator (circuit theory), Segmentation, Source code, Feature extraction, Machine learning, Contextual image classification, Image (mathematics), Power (physics), Detector

Publication Info

Year: 2018
Type: Conference paper
Pages: 3723-3732
Citations: 2122
Access: Closed

Citation Metrics

OpenAlex: 2122
Influential: 328
CrossRef: 1577

Cite This

Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku et al. (2018). Maximum Classifier Discrepancy for Unsupervised Domain Adaptation. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3723-3732. https://doi.org/10.1109/cvpr.2018.00392

Identifiers

DOI: 10.1109/cvpr.2018.00392
arXiv: 1712.02560

Data Quality

Data completeness: 84%