DOI: 10.1145/3340531.3412083

Robust Normalized Squares Maximization for Unsupervised Domain Adaptation

Published: 19 October 2020

ABSTRACT

Unsupervised domain adaptation (UDA) attempts to transfer knowledge from a domain with labeled data to another domain without labels. Recently, the maximum squares loss was proposed to tackle the UDA problem, but it does not account for prediction diversity, which has proven beneficial to UDA. In this paper, we propose a novel normalized squares maximization (NSM) loss in which the maximum squares term is normalized by the sum of squares of the class sizes. The normalization term enforces balanced class sizes in the predictions, explicitly increasing diversity. Theoretical analysis shows that the optimal solution to NSM consists of one-hot vectors with balanced class sizes, i.e., NSM encourages both discriminative and diverse predictions. We further propose a robust variant of NSM, RNSM, which replaces the square loss with the L2,1-norm to reduce the influence of outliers and noise. Experiments on cross-domain image classification over two benchmark datasets illustrate the effectiveness of both NSM and RNSM. RNSM achieves promising performance compared to state-of-the-art methods. The code is available at https://github.com/wj-zhang/NSM.
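The objective described above can be sketched as follows. This is an illustrative reading of the abstract, not the paper's exact formulation: it assumes softmax predictions P of shape (N, C) and takes the "class size" of class c to be the soft count, i.e., the column sum of P.

```python
import numpy as np

def nsm_objective(p):
    """Normalized squares maximization (illustrative sketch).

    p: (N, C) array of softmax predictions; each row sums to 1.
    Numerator: sum of squared probabilities (the maximum squares term),
    maximized by confident, one-hot-like rows.
    Denominator: sum of squares of the soft class sizes, minimized when
    class sizes are balanced, so maximizing the ratio rewards predictions
    that are both discriminative and diverse.
    """
    squares = np.sum(p ** 2)
    class_sizes = p.sum(axis=0)            # soft count of samples per class
    normalizer = np.sum(class_sizes ** 2)  # penalizes imbalanced class sizes
    return squares / normalizer
```

For example, with N = 8 samples and C = 4 classes, balanced one-hot predictions yield 0.5, while both uniform predictions and one-hot predictions collapsed onto a single class yield 0.125, matching the claim that the optimum is one-hot vectors with balanced class sizes.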


Supplemental Material

3340531.3412083.mp4 (mp4, 48.6 MB)


Published in
      CIKM '20: Proceedings of the 29th ACM International Conference on Information & Knowledge Management
      October 2020
      3619 pages
      ISBN:9781450368599
      DOI:10.1145/3340531

      Copyright © 2020 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Qualifiers

      • short-paper

      Acceptance Rates

Overall Acceptance Rate: 1,861 of 8,427 submissions (22%)
