ABSTRACT
Unsupervised domain adaptation (UDA) attempts to transfer knowledge from a domain with labeled data to another domain without labels. Recently, the maximum squares loss was proposed to tackle the UDA problem, but it does not consider prediction diversity, which has proven beneficial to UDA. In this paper, we propose a novel normalized squares maximization (NSM) loss in which the maximum squares term is normalized by the sum of squares of the class sizes. The normalization term enforces balanced class sizes in the predictions, explicitly increasing diversity. Theoretical analysis shows that the optimal solution to NSM consists of one-hot vectors with balanced class sizes, i.e., NSM encourages predictions that are both discriminative and diverse. We further propose a robust variant of NSM, RNSM, which replaces the square loss with the L2,1-norm to reduce the influence of outliers and noise. Experiments on cross-domain image classification over two benchmark datasets illustrate the effectiveness of both NSM and RNSM. RNSM achieves promising performance compared to state-of-the-art methods. The code is available at https://github.com/wj-zhang/NSM.
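To make the idea concrete, below is a minimal NumPy sketch of a normalized-squares objective of the kind the abstract describes: the sum of squared prediction probabilities (the maximum squares term) divided by the sum of squared soft class sizes. This is an illustrative reconstruction from the abstract alone, not the authors' implementation; the function name `nsm_loss` and the exact normalization are assumptions.

```python
import numpy as np

def nsm_loss(probs, eps=1e-8):
    """Sketch of a normalized squares maximization loss.

    probs: (N, C) array of softmax predictions on unlabeled target data.
    Returns a scalar to be minimized.
    """
    # Numerator: sum of squared probabilities. It grows as predictions
    # become confident (near one-hot), mirroring the maximum squares loss.
    max_squares = np.sum(probs ** 2)
    # Soft class sizes: expected number of samples assigned to each class.
    class_sizes = probs.sum(axis=0)            # shape (C,)
    # Denominator: sum of squared class sizes. For a fixed total, it is
    # smallest when class sizes are balanced, so dividing by it rewards
    # diverse (balanced) predictions.
    norm = np.sum(class_sizes ** 2) + eps
    # Negate the ratio so that gradient descent maximizes it.
    return -max_squares / norm
```

Under this sketch, balanced one-hot predictions score better (a lower loss) than either uniform predictions or confident predictions collapsed onto a single class, which matches the stated optimum of one-hot vectors with balanced class sizes.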