research-article

Logic Shrinkage: Learned Connectivity Sparsification for LUT-Based Neural Networks

Published: 01 September 2023

Abstract

Field-programmable gate array (FPGA)–specific deep neural network (DNN) architectures using native lookup tables (LUTs) as independently trainable inference operators have been shown to achieve favorable area-accuracy and energy-accuracy trade-offs. The first work in this area, LUTNet, exhibited state-of-the-art performance for standard DNN benchmarks. In this article, we propose the learned optimization of such LUT-based topologies, resulting in higher-efficiency designs than via the direct use of off-the-shelf, hand-designed networks. Existing implementations of this class of architecture require the manual specification of the number of inputs per LUT, K. Choosing an appropriate K a priori is challenging. Doing so even at high granularity, for example, per layer, is a time-consuming and error-prone process that leaves FPGAs' spatial flexibility underexploited. Furthermore, prior works connect LUT inputs randomly, which does not guarantee a good choice of network topology. To address these issues, we propose logic shrinkage, a fine-grained netlist pruning methodology enabling K to be automatically learned for every LUT in a neural network targeted for FPGA inference. By removing LUT inputs determined to be of low importance, our method increases the efficiency of the resultant accelerators. Our GPU-friendly solution to LUT input removal is capable of processing large topologies during their training with negligible slowdown. With logic shrinkage, we improve the area and energy efficiency of the best-performing LUTNet implementation of the CNV network classifying CIFAR-10 by 1.54× and 1.31×, respectively, while matching its accuracy. This implementation also reaches 2.71× the area efficiency of an equally accurate, heavily pruned binary neural network (BNN). On ImageNet, with the Bi-Real Net architecture, the employment of logic shrinkage results in a post-synthesis area reduction of 2.67× vs. LUTNet, enabling an implementation that was previously impossible on today's largest FPGAs. We validate the benefits of logic shrinkage in the context of real application deployment by implementing a face mask detection DNN using BNN, LUTNet, and logic-shrunk layers. Our results show that logic shrinkage yields area gains versus LUTNet (up to 1.20×) and equally pruned BNNs (up to 1.08×), along with accuracy improvements.
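The core idea summarized above, learning a per-LUT fan-in K by removing low-importance input connections, can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the importance scores, the threshold, and the function name are assumptions standing in for trained per-connection saliencies and the actual GPU-friendly training-time procedure.

```python
# Illustrative sketch of fine-grained LUT input pruning ("logic shrinkage").
# Each LUT starts with a fixed set of candidate inputs; connections whose
# learned importance falls below a threshold are removed, so K is learned
# per LUT rather than specified manually. Scores here are hypothetical.

def shrink_lut_inputs(importance, threshold):
    """Return, per LUT, the indices of the inputs kept after pruning.

    importance: list of per-LUT lists of non-negative scores,
                one score per candidate input connection.
    threshold:  connections scoring below this value are removed.
    """
    kept = []
    for scores in importance:
        survivors = [i for i, s in enumerate(scores) if s >= threshold]
        # Keep at least the single strongest input so the LUT remains
        # a function of its neighborhood rather than a constant.
        if not survivors:
            survivors = [max(range(len(scores)), key=scores.__getitem__)]
        kept.append(survivors)
    return kept

# Three 4-input LUTs with hypothetical learned importance scores.
scores = [
    [0.9, 0.05, 0.4, 0.01],
    [0.02, 0.03, 0.01, 0.6],
    [0.5, 0.5, 0.5, 0.5],
]
kept = shrink_lut_inputs(scores, threshold=0.1)
print(kept)                     # → [[0, 2], [3], [0, 1, 2, 3]]
print([len(k) for k in kept])   # learned K per LUT: [2, 1, 4]
```

The point of the sketch is that K becomes a per-LUT outcome of training rather than a global hyperparameter: the first LUT shrinks to K = 2, the second to K = 1, and the third retains all four inputs.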



Published in

ACM Transactions on Reconfigurable Technology and Systems, Volume 16, Issue 4
December 2023, 343 pages
ISSN: 1936-7406
EISSN: 1936-7414
DOI: 10.1145/3615981
Editor: Deming Chen

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

          Publisher

          Association for Computing Machinery

          New York, NY, United States

Publication History

• Published: 1 September 2023
• Online AM: 10 February 2023
• Accepted: 26 January 2023
• Revised: 23 December 2022
• Received: 25 September 2022
