Logic Shrinkage: Learned Connectivity Sparsification for LUT-Based Neural Networks

Abstract
Field-programmable gate array (FPGA)–specific deep neural network (DNN) architectures that use native lookup tables (LUTs) as independently trainable inference operators have been shown to achieve favorable area-accuracy and energy-accuracy trade-offs. The first work in this area, LUTNet, exhibited state-of-the-art performance on standard DNN benchmarks. In this article, we propose the learned optimization of such LUT-based topologies, resulting in higher-efficiency designs than the direct use of off-the-shelf, hand-designed networks. Existing implementations of this class of architecture require the manual specification of the number of inputs per LUT, K. Choosing an appropriate K a priori is challenging, and doing so even at high granularity, for example per layer, is a time-consuming and error-prone process that leaves FPGAs’ spatial flexibility underexploited. Furthermore, prior works connect LUT inputs randomly, which does not guarantee a good choice of network topology. To address these issues, we propose logic shrinkage, a fine-grained netlist pruning methodology that enables K to be learned automatically for every LUT in a neural network targeted for FPGA inference. By removing LUT inputs determined to be of low importance, our method increases the efficiency of the resultant accelerators. Our GPU-friendly solution to LUT input removal can process large topologies during training with negligible slowdown. With logic shrinkage, we improve the area and energy efficiency of the best-performing LUTNet implementation of the CNV network classifying CIFAR-10 by 1.54× and 1.31×, respectively, while matching its accuracy. This implementation also reaches 2.71× the area efficiency of an equally accurate, heavily pruned binary neural network (BNN). On ImageNet, with the Bi-Real Net architecture, logic shrinkage yields a post-synthesis area reduction of 2.67× vs. LUTNet, enabling an implementation that was previously impossible on today’s largest FPGAs. We validate the benefits of logic shrinkage in the context of real application deployment by implementing a face mask detection DNN using BNN, LUTNet, and logic-shrunk layers. Our results show that logic shrinkage yields area gains versus LUTNet (up to 1.20×) and equally pruned BNNs (up to 1.08×), along with accuracy improvements.
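The core pruning step described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' implementation: we assume each K-input LUT carries one learned importance score per input (e.g. the magnitude of a gating parameter accumulated during training), and inputs scoring below a threshold are disconnected, shrinking that LUT's effective K. The function name and thresholding scheme are illustrative assumptions.

```python
def shrink_lut_inputs(importance, threshold):
    """Return the indices of LUT inputs retained after logic shrinkage.

    importance: per-input importance scores, one per LUT input (assumed
                here to be magnitudes of learned gating parameters).
    threshold:  inputs scoring below this are deemed low-importance
                and removed from the netlist.
    """
    kept = [i for i, score in enumerate(importance) if score >= threshold]
    # Keep at least the single most important input so the LUT remains
    # a valid operator with K >= 1.
    if not kept:
        kept = [max(range(len(importance)), key=lambda i: importance[i])]
    return kept

# Example: a 4-input LUT whose last two inputs contribute little.
scores = [0.92, 0.61, 0.05, 0.11]
print(shrink_lut_inputs(scores, threshold=0.2))  # -> [0, 1]: K shrinks from 4 to 2
```

Applied independently to every LUT in the netlist, this yields a per-LUT K rather than one chosen by hand per layer, which is what lets the method exploit the FPGA's fine spatial granularity.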