Abstract
A new architecture for the optical implementation of large-scale neural networks is proposed. The architecture is based on a time-division-multiplexing technique, in which both the neuron state vector and the interconnection matrix are partitioned in the time domain. Computer simulation and experimental results for associative memories demonstrate its effectiveness in implementing large-scale networks.
© 1990 Optical Society of America
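The core idea of the time-division-multiplexing scheme can be illustrated numerically: if the interconnection matrix and the state vector are partitioned into blocks, each time slot need only process one sub-block and one sub-vector, and accumulating the partial products over all slots recovers the full matrix-vector product. The sketch below is an illustration of that partitioning only, not the authors' optical implementation; the sizes `N` and `B` are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8          # total number of neurons (arbitrary example size)
B = 4          # number of time-division blocks; N must be divisible by B
blk = N // B   # neurons handled per time slot

x = rng.standard_normal(N)        # neuron state vector
W = rng.standard_normal((N, N))   # interconnection matrix

# Time-division accumulation: in slot (i, j) only the sub-block
# W[i*blk:(i+1)*blk, j*blk:(j+1)*blk] and the sub-vector
# x[j*blk:(j+1)*blk] need to be loaded into the system.
y = np.zeros(N)
for i in range(B):
    for j in range(B):
        y[i*blk:(i+1)*blk] += (
            W[i*blk:(i+1)*blk, j*blk:(j+1)*blk] @ x[j*blk:(j+1)*blk]
        )

# The accumulated result equals the full matrix-vector product.
assert np.allclose(y, W @ x)
```

Each slot thus handles only a `blk × blk` sub-matrix, which is what allows a network larger than the optical hardware's native size to be implemented.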