Comparison of neofuzzy and rough neural networks

https://doi.org/10.1016/S0020-0255(97)10045-7

Abstract

Conventional neural network architectures generally lack semantics. Both rough and neofuzzy neurons introduce semantic structures into conventional neural network models. Rough neurons make it possible to process inputs given as ranges of values rather than single precise values. Neofuzzy neurons make it possible to convert crisp values into fuzzy values. This paper compares rough and neofuzzy neural networks, and the two neuron types are shown to complement each other. It is shown that introducing rough and fuzzy semantic structures into neural networks can increase the accuracy of predictions.
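The two neuron types contrasted in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes one common formulation of each unit, namely a rough neuron that propagates an interval through a sigmoid as a (lower, upper) pair of outputs, and a neofuzzy neuron that fuzzifies a crisp input with triangular membership functions on a uniform grid and returns their weighted sum. All function names and parameters here are illustrative.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def rough_neuron(x_low, x_high, w, b):
    """Illustrative rough neuron: accepts an interval [x_low, x_high]
    instead of a single precise value and emits a (lower, upper)
    pair of activations bounding the response over that interval."""
    a = sigmoid(w * x_low + b)
    c = sigmoid(w * x_high + b)
    return min(a, c), max(a, c)

def neofuzzy_neuron(x, centers, weights):
    """Illustrative neofuzzy neuron: converts the crisp input x into
    fuzzy membership degrees (triangular functions on a uniform grid
    of `centers`), then outputs the membership-weighted sum."""
    width = centers[1] - centers[0]  # uniform grid spacing assumed
    y = 0.0
    for c, w in zip(centers, weights):
        mu = max(0.0, 1.0 - abs(x - c) / width)  # triangular membership
        y += w * mu
    return y
```

For example, `rough_neuron(0.0, 1.0, 1.0, 0.0)` returns a pair of bounds around the crisp response, while `neofuzzy_neuron(0.5, [0.0, 0.5, 1.0], [0.0, 1.0, 0.0])` fully activates the middle membership function, showing how each unit attaches an interpretable (interval or fuzzy) meaning to its computation.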

