
Network Pruning via Feature Shift Minimization

  • Conference paper
Computer Vision – ACCV 2022 (ACCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13841)

Abstract

Channel pruning is widely used to reduce the complexity of deep network models. Recent pruning methods usually identify which parts of the network to discard by proposing a channel importance criterion. However, recent studies have shown that these criteria do not work well in all conditions. In this paper, we propose a novel Feature Shift Minimization (FSM) method to compress CNN models, which evaluates the feature shift by combining the information of both features and filters. Specifically, we first investigate the compression efficiency of several prevalent methods at different layer depths and then propose the concept of feature shift. We then introduce an approximation method to estimate the magnitude of the feature shift, since it is difficult to compute directly. In addition, we present a distribution-optimization algorithm to compensate for the accuracy loss and improve network compression efficiency. The proposed method yields state-of-the-art performance on various benchmark networks and datasets, as verified by extensive experiments. Our code is available at: https://github.com/lscgx/FSM.
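
To make the channel-scoring idea concrete, below is a minimal, hypothetical PyTorch sketch of pruning guided by a score that combines a filter statistic with a feature statistic. It is not the paper's FSM criterion, its feature-shift approximation, or its distribution-optimization step; the functions channel_scores and prune_mask, the product-of-statistics score, and the 0.5 pruning ratio are illustrative assumptions only (the authors' implementation is in the repository linked above).

# Hypothetical sketch: score the output channels of a conv layer using both
# filter weights and feature activations, then keep the top-scoring ones.
# This is NOT the paper's FSM formulation.
import torch
import torch.nn as nn

def channel_scores(conv: nn.Conv2d, feat: torch.Tensor) -> torch.Tensor:
    """Score each output channel of `conv`; feat has shape (N, C_out, H, W)."""
    # Filter-side statistic: L2 norm of each filter's weights.
    w = conv.weight.detach()                            # (C_out, C_in, kH, kW)
    filter_norm = w.flatten(1).norm(dim=1)              # (C_out,)
    # Feature-side statistic: mean absolute activation per channel, a crude
    # proxy for how much the output distribution would shift if the channel
    # were zeroed out.
    feat_mag = feat.detach().abs().mean(dim=(0, 2, 3))  # (C_out,)
    return filter_norm * feat_mag

def prune_mask(scores: torch.Tensor, ratio: float) -> torch.Tensor:
    """Boolean mask keeping the (1 - ratio) fraction of highest-scoring channels."""
    n_keep = max(1, int(scores.numel() * (1.0 - ratio)))
    keep = torch.zeros_like(scores, dtype=torch.bool)
    keep[scores.topk(n_keep).indices] = True
    return keep

if __name__ == "__main__":
    conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
    x = torch.randn(8, 16, 28, 28)
    scores = channel_scores(conv, conv(x))
    mask = prune_mask(scores, ratio=0.5)
    print(f"keeping {int(mask.sum())} of {mask.numel()} channels")

In a full pruning pipeline, the mask would then be used to rebuild the layer with fewer output channels (and to shrink the next layer's input channels), followed by fine-tuning to recover accuracy.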

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61976246 and U20A20227) and the Natural Science Foundation of Chongqing (Grant No. cstc2020jcyj-msxmX0385).

Author information

Corresponding author

Correspondence to Xiaofang Hu.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Duan, Y., Zhou, Y., He, P., Liu, Q., Duan, S., Hu, X. (2023). Network Pruning via Feature Shift Minimization. In: Wang, L., Gall, J., Chin, TJ., Sato, I., Chellappa, R. (eds) Computer Vision – ACCV 2022. ACCV 2022. Lecture Notes in Computer Science, vol 13841. Springer, Cham. https://doi.org/10.1007/978-3-031-26319-4_37

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-26319-4_37

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-26318-7

  • Online ISBN: 978-3-031-26319-4

  • eBook Packages: Computer Science, Computer Science (R0)
