Abstract
Out-of-distribution detection seeks to identify novelties: samples that deviate from the norm. The task is particularly challenging when the normal data distribution consists of multiple semantic classes (e.g., multiple object categories). To overcome this challenge, current approaches require manually labeled normal images during training. In this work, we tackle multi-class novelty detection without class labels. Our simple but effective solution consists of two stages: we first discover “pseudo-class” labels using unsupervised clustering; then, using these pseudo-class labels, we apply standard supervised out-of-distribution detection methods. We verify the performance of our method by favorable comparison to the state of the art, and provide extensive analysis and ablations.
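The two-stage recipe from the abstract can be sketched as follows. This is an illustrative toy example, not the authors' implementation: synthetic features stand in for pretrained image embeddings, k-means supplies the pseudo-class labels, and the standard max-softmax-probability (MSP) baseline serves as the supervised OOD score.

```python
# Toy sketch of the two-stage pipeline (illustrative, not the authors' code):
# (1) discover pseudo-class labels by clustering features,
# (2) apply a standard supervised OOD score (max softmax probability).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for pretrained features of unlabeled in-distribution training
# images: two well-separated "semantic classes" in feature space.
train_feats = np.vstack([
    rng.normal(loc=-3.0, scale=0.5, size=(100, 8)),
    rng.normal(loc=+3.0, scale=0.5, size=(100, 8)),
])

# Stage 1: unsupervised clustering yields pseudo-class labels.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
pseudo_labels = kmeans.fit_predict(train_feats)

# Stage 2: train a supervised classifier on the pseudo-labels, then score
# test samples with max softmax probability: low confidence suggests OOD.
clf = LogisticRegression(max_iter=1000).fit(train_feats, pseudo_labels)

def msp_score(feats):
    """Max softmax probability; higher means more in-distribution."""
    return clf.predict_proba(feats).max(axis=1)

in_dist = rng.normal(loc=-3.0, scale=0.5, size=(5, 8))  # near a training mode
out_dist = rng.normal(loc=0.0, scale=0.5, size=(5, 8))  # away from both modes

print(msp_score(in_dist).mean(), msp_score(out_dist).mean())
```

In this sketch the in-distribution samples receive a noticeably higher MSP score than the out-of-distribution ones; the paper's actual method pairs stronger clustering and OOD scoring components, but the two-stage structure is the same.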
Acknowledgements
This work was partly supported by the Malvina and Solomon Pollack scholarship and the Federmann Cyber Security Research Center in conjunction with the Israel National Cyber Directorate.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Cohen, N., Abutbul, R., Hoshen, Y. (2023). Out-of-Distribution Detection Without Class Labels. In: Karlinsky, L., Michaeli, T., Nishino, K. (eds) Computer Vision – ECCV 2022 Workshops. ECCV 2022. Lecture Notes in Computer Science, vol 13802. Springer, Cham. https://doi.org/10.1007/978-3-031-25063-7_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-25062-0
Online ISBN: 978-3-031-25063-7
eBook Packages: Computer Science (R0)