Journal of Robotics, Networking and Artificial Life
Online ISSN : 2352-6386
Print ISSN : 2405-9021
INT8 Activation Ternary or Binary Weights Networks: Unifying Between INT8 and Lower-bit Width Quantization
Ninnart Fuengfusin, Hakaru Tamukoh
Open Access

2022 Volume 9 Issue 2 Pages 171-176

Abstract

This paper proposes convolutional neural networks with ternary or binary weights and 8-bit integer activations. The proposed model serves as a middle ground between 8-bit integer and lower-than-8-bit precision quantized models. Our empirical experiments establish that conventional 1-bit or 2-bit weight-only quantization methods (i.e., BinaryConnect and the ternary weight network) can be used jointly with 8-bit integer activation quantization. We evaluate our model with a VGG16-like architecture on the CIFAR10 and CIFAR100 datasets. Our models show results competitive with the standard 32-bit floating-point model.
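To illustrate the two ingredients the abstract combines, the sketch below shows BinaryConnect-style weight binarization, ternary-weight-network-style thresholding, and symmetric 8-bit integer activation quantization. This is a minimal NumPy illustration under common conventions (sign-based binarization, a fixed ternary threshold, and round-and-clip INT8 quantization with a chosen scale), not the authors' implementation; the threshold and scale values are assumptions for demonstration.

```python
import numpy as np

def binarize_weights(w):
    # BinaryConnect-style binarization: map weights to {-1, +1} via sign
    # (zero is conventionally mapped to +1).
    return np.where(w >= 0, 1.0, -1.0)

def ternarize_weights(w, threshold=0.05):
    # Ternary-weight-network-style quantization: map weights to {-1, 0, +1}
    # using a threshold (a fixed value here; learned or statistics-based
    # thresholds are also common).
    return np.where(w > threshold, 1.0,
                    np.where(w < -threshold, -1.0, 0.0))

def quantize_activations_int8(x, scale):
    # Symmetric INT8 quantization: round(x / scale), clipped to [-128, 127].
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

# Example: quantize a small weight vector and activation vector.
w = np.array([0.30, -0.02, -0.40, 0.01])
a = np.array([0.5, -1.2, 2.0])

print(binarize_weights(w))                  # [ 1. -1. -1.  1.]
print(ternarize_weights(w))                 # [ 1.  0. -1.  0.]
print(quantize_activations_int8(a, 0.01))   # [  50 -120  127]
```

Because the quantized weights are restricted to {-1, 0, +1}, the weight-activation products in a convolution reduce to additions and subtractions of INT8 values, which is the efficiency motivation behind combining the two schemes.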

© 2022 ALife Robotics Corporation Ltd.

This article is provided under a Creative Commons [Attribution-NonCommercial 4.0 International] license.
https://creativecommons.org/licenses/by-nc/4.0/deed.ja