ABSTRACT
A fundamental question arises in almost every application of deep neural networks: what is the optimal neural architecture for a given data set? Recently, several Neural Architecture Search (NAS) frameworks have been developed that use reinforcement learning or evolutionary algorithms to search for a solution. However, most of them take a long time to find the optimal architecture due to the huge search space and the lengthy training process needed to evaluate each candidate. In addition, most of them optimize for accuracy only and do not account for the hardware on which the architecture will be implemented. This can lead to latencies that exceed the specification, rendering the resulting architectures useless. To address both issues, in this paper we use Field Programmable Gate Arrays (FPGAs) as a vehicle to present a novel hardware-aware NAS framework, namely FNAS, which produces an optimal neural architecture whose latency is guaranteed to meet the specification. In addition, with a performance abstraction model that analyzes the latency of a neural architecture without training it, our framework can quickly prune architectures that do not satisfy the specification, leading to higher search efficiency. Experimental results on common data sets such as ImageNet show that in cases where the state-of-the-art generates architectures with latencies 7.81× longer than the specification, those from FNAS meet the specs with less than 1% accuracy loss. Moreover, FNAS achieves up to an 11.13× speedup for the search process. To the best of the authors' knowledge, this is the very first hardware-aware NAS.
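The pruning step described in the abstract can be illustrated with a minimal sketch. The snippet below is not FNAS's actual implementation: the analytical latency model is reduced to a compute-bound roofline estimate, and all names (Layer, estimate_latency_ms, sample_candidate, the DSP and clock parameters) are hypothetical stand-ins. It shows the core idea: candidates whose estimated FPGA latency exceeds the specification are rejected before any training time is spent on them.

```python
"""Minimal sketch of latency-constrained NAS pruning. This is an
illustration under stated assumptions, not FNAS's actual code: the
latency model here is compute-bound only, and every name below is a
hypothetical stand-in."""

import random
from dataclasses import dataclass
from typing import List


@dataclass
class Layer:
    # A convolutional layer described by its shape parameters.
    out_channels: int
    in_channels: int
    fmap_size: int     # output feature-map height/width (assumed square)
    kernel_size: int


def estimate_latency_ms(arch: List[Layer],
                        dsp_macs_per_cycle: int = 1024,
                        clock_mhz: float = 200.0) -> float:
    """Roofline-style analytical estimate (compute-bound only, for
    brevity): total MACs divided by the accelerator's assumed peak
    MACs per cycle, converted to milliseconds."""
    total_macs = sum(l.out_channels * l.in_channels
                     * l.fmap_size ** 2 * l.kernel_size ** 2
                     for l in arch)
    cycles = total_macs / dsp_macs_per_cycle
    return cycles / (clock_mhz * 1e3)   # cycles per ms at clock_mhz


def sample_candidate() -> List[Layer]:
    """Stand-in for the controller (RL or evolutionary) that proposes
    architectures; here it just samples a random small CNN."""
    depth = random.randint(4, 12)
    arch, in_ch = [], 3
    for _ in range(depth):
        out_ch = random.choice([32, 64, 128, 256])
        arch.append(Layer(out_ch, in_ch, fmap_size=56,
                          kernel_size=random.choice([1, 3, 5])))
        in_ch = out_ch
    return arch


def search(latency_spec_ms: float, n_candidates: int = 100) -> None:
    kept = 0
    for _ in range(n_candidates):
        arch = sample_candidate()
        if estimate_latency_ms(arch) > latency_spec_ms:
            continue  # pruned without training: the source of speedup
        # train_and_evaluate(arch)  # only spec-satisfying ones trained
        kept += 1
    print(f"{kept}/{n_candidates} candidates meet the "
          f"{latency_spec_ms} ms spec")


search(latency_spec_ms=5.0)
```

Because the latency estimate is purely analytical, each rejected candidate costs a fraction of a second rather than the full training run needed to evaluate its accuracy, which is the intuition behind the search speedup the paper reports.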