
Periodicity and stability issues of a chaotic pattern recognition neural network


Abstract

Traditional pattern recognition (PR) systems work with the model that the object to be recognized is characterized by a set of features, which are treated as the inputs. In this paper, we propose a new model for PR, namely one that involves chaotic neural networks (CNNs). To achieve this, we enhance the basic model proposed by Adachi (Neural Netw 10:83–98, 1997), referred to as Adachi's Neural Network (AdNN), which, though dynamic, is not chaotic. We demonstrate that by decreasing the multiplicity of the eigenvalues of the AdNN's control system, we can effectively drive the system into chaos. We prove this result here by eigenvalue computations and by evaluating the Lyapunov exponent. With this premise, we then show that such a Modified AdNN (M-AdNN) has the desirable property that it recognizes various input patterns. This PR is achieved by the system essentially "resonating" sympathetically with a finite periodicity whenever these samples (or reasonable resemblances of them) are presented. In this paper, we analyze the M-AdNN for its periodicity, stability and the length of the transient phase of the retrieval process. The M-AdNN has been tested on Adachi's dataset and on a real-life PR problem involving numerals. We believe that this research also opens a host of new research avenues.



Notes

  1. Unfortunately, if the external excitation forces the brain out of chaos completely, it can lead to an epileptic seizure, and a future goal of this research is to see how these episodes can be anticipated, remedied and/or prevented. Some initial results of how this can be achieved are currently available.

  2. They did this in an input-specific manner as follows. Let us suppose that \(x_i^s\) is the value of the ith feature (i.e., the pixel, in the domain example) of the sth pattern. Rather than feeding \(x_i^s\) directly to the network, they added a bias of '2' if the pixel value was '0', and a bias of '8' if the pixel value was '1', and fed the resulting value to the network as the input \(a_i\). The intention was to artificially create a greater (scaled) disparity between the values of '0' and '1'. Adachi et al. also tried to explain the significance of these biases. (A literal rendering of this rule, in code, appears after these notes.)

  3. It is well known that the Adachi model (see Kawakami (ed) Bifurcation phenomena in nonlinear systems and theory of dynamical systems. World Scientific, Singapore, 1990, pp 143–161) has a biological basis. In that work, the authors showed that the output of each neuron is related to its own history through the information in the states. In our model, by contrast, the output of each neuron is related to the historical status of one special neuron, and the relation with its own "past" is achieved by an interaction between the states. Such a modification leads to the possibility of switching from chaos to periodicity, although the biological rationale is not yet fully understood. However, its role in controlling epilepsy seems to have been resolved [3].

  4. It is also possible to relate \(\eta(t+1)\) and \(\zeta(t+1)\) to \(\eta_K(t+1)\) and \(\zeta_K(t+1)\) for any fixed K. However, we would like to highlight that the uniqueness of our model consists of "binding" these global state values to the specific values of any one state variable (as opposed to binding each \(\zeta_i(t+1)\) to \(\zeta_i(t)\) itself, as the AdNN does). Theoretically, it seems clear that the specific value of K, which identifies this "binding" neuron, is of no significance. However, the question of whether it will give us any added PR capability remains open.

  5. The term “sympathetic resonance” is used for lack of a better one. Quite simply, all we require is that the trained pattern periodically surfaces as the output of the CNN.

  6. We submit here (as a footnote) a few introductory sentences regarding such an analysis. The analysis of a general nonlinear system has two steps. The first consists of computing the steady state points as determined by the testing pattern. The second consists of analyzing the stability of each steady state point. The stability consideration generally involves approximating the dynamics of the nonlinear system by an approximating linear system (for example, by computing the Jacobian). This approximation is valid in a neighbourhood of the quiescent steady states (or attracting manifold), and is obtained by expanding the system in a Taylor series and neglecting the higher-order terms. (A numerical sketch of these two steps appears after these notes.)

  7. Observe that \(J^3_{ij}(t)\) has the value zero; this result is obtained as follows: \(\frac{\partial \xi_i(t+1)}{\partial \eta_j(t)} = \frac{\partial (k_r \xi_i(t)-\alpha x_i(t)+a_i)}{\partial \eta_j(t)}=0.\)

  8. It turns out that the AdNN is actually not operating as a CNN, and this further strengthens the claim presented here. Rather, it slowly converges towards stable but periodic orbits. Adachi actually mentions this indirectly in his paper, when he states that the "chaos" of his model depends on the parameters of the network. This can be seen in Fig. 8 and Table 2 of Ref. [1]. When \(k_r = 0.95\), \(\lambda_{\max}\) was computed to be 0.00028, but this result is an approximation to the true value of \(\lambda_{\max}\), which turns out to be −0.051293.

  9. As opposed to the AdNN, the M-AdNN has an additive positive term in the computation of the Lyapunov exponents: \(\lambda_{2N-1}=\frac{1}{2\ln N}+\ln(k_f)\) and \(\lambda_{2N}=\frac{1}{2\ln N}+\ln(k_r)\), granting us the flexibility of forcing the system to be chaotic. Notice that since all but two of the eigenvalues are zero, the system converges rapidly to a fixed value along all the directions corresponding to the zero eigenvalues. Along the directions corresponding to the other eigenvalues, the convergence is chaotic. (These expressions are evaluated numerically in a sketch following these notes.)

  10. The periodicity of the M-AdNN has nothing to do with the identity of the classes. We believe that the brain possesses this same property (for otherwise, it would resonate differently for every single pattern that it is trained with, and the set of patterns which we can recognize is truly "infinitely large"). But we are currently investigating how we can set the parameters of the M-AdNN (\(k_r\) and \(k_f\)) to be class-dependent.

  11. How the brain is able to record and recognize such a periodic behaviour amidst chaos is yet unexplained [5].

  12. The unique characteristic of such a PR system is that each trained pattern has a unique attractor. When the testing pattern is fed into the system, the system converges to the attractor that characterizes it best. The difficult question, for which we welcome suggestions, is that of knowing which attractor it falls on. As it stands now, we have used a simplistic "compare against all" strategy, but even here, we believe that hierarchical strategies that use syntactic clustering could enhance the search. Besides, such a comparison needs to be invoked only if a periodicity is observed by, for example, a frequency domain analysis. This is an extremely interesting area of research which we are currently pursuing.

  13. It would have been good if the periodicity was uniquely linked to the pattern classes, but, unfortunately, this is not the case. Rather, the system possesses the characteristic that it switches from being chaotic to periodic whenever a noisy version of one of the trained patterns is received. So, it would be more appropriate to say that this switching phenomenon occurs with 100% accuracy. We have taken the liberty to refer to this as “100% PR” accuracy.

  14. We would like to emphasize that the pattern, from a PR perspective, is not a "shape". Rather, it is a 100-dimensional vector, every component of which can be modified. Thus, since the noise is bit-wise, the current PR exercise may not work if the "images" are subject to translation/rotation operations. To also accommodate translations in an image-processing application, we propose to preprocess the patterns so as to extract features, which would then serve as the above bit-wise array representing the vector.

  15. In the case of Pattern 3, the first periodic pattern was visible at time index 19. But the periodicity was observable only after time index 39.

  16. The periodicity (7, 15) means that we encounter a "double cycle". Thus, after the transient phase, the training pattern occurs at times 7, 22, 29, 44, etc. This is because we have an eight-shaped limit cycle, with the smaller loop of the '8' having a periodicity of 7, and the larger loop having a periodicity of 15.

  17. The training, which is a "one-shot" assignment, initializes each \(w_{ij}\) to zero and then, for all the training patterns \(\{X^s = [x_1^s \ldots x_N^s]\}\), accumulates \(w_{ij} \leftarrow w_{ij} + x_i^s x_j^s\). (This rule is implemented in the sketch in the Appendix.)
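A literal rendering of the biasing rule of Footnote 2, as a minimal sketch (the function name is ours). Under a literal reading of that note, the resulting network inputs are 2 for a '0' pixel and 9 for a '1' pixel.

```python
# A literal rendering of the input biasing of Footnote 2: a bias of 2 is
# added when the pixel is 0, and a bias of 8 when it is 1, and the result
# is fed to the network as the external input a_i.
def encode_pattern(pixels):
    return [p + (2 if p == 0 else 8) for p in pixels]

# Example: the binary pattern [0, 1, 1, 0] yields the inputs [2, 9, 9, 2].
print(encode_pattern([0, 1, 1, 0]))
```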
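The two-step analysis of Footnote 6 can be illustrated numerically. The sketch below uses a generic contraction map as a stand-in for the nonlinear system (it is not the M-AdNN dynamics): a steady state is located by fixed-point iteration, the Jacobian is approximated by finite differences, and the discrete-time stability criterion (all eigenvalues strictly inside the unit circle) is applied.

```python
# A minimal sketch of the two-step stability analysis of Footnote 6, for
# a generic discrete-time map x(t+1) = f(x(t)).
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Approximate the Jacobian of f at x by central differences."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

# Step 1: a steady state x* satisfies f(x*) = x* (found here by iteration).
# Step 2: x* is locally stable if every eigenvalue of the Jacobian at x*
# lies strictly inside the unit circle (discrete-time criterion).
f = lambda x: np.tanh(0.5 * x)          # illustrative contraction map
x = np.ones(3)
for _ in range(200):                    # crude fixed-point iteration
    x = f(x)
eigvals = np.linalg.eigvals(numerical_jacobian(f, x))
print("steady state:", x, "stable:", np.all(np.abs(eigvals) < 1))
```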
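The closed-form exponents of Footnote 9 can be evaluated directly. In the sketch below, we read the leading term as 1/(2 ln N); N = 100 matches the 100-dimensional patterns mentioned in Footnote 14, k_r = 0.95 is the value discussed in Footnote 8, and k_f = 0.2 is an assumed illustrative value, not one taken from the paper.

```python
# Evaluating the two non-zero Lyapunov exponents of Footnote 9, reading
# the leading term as 1/(2 ln N). N = 100 and k_r = 0.95 come from the
# surrounding notes; k_f = 0.2 is an assumed illustrative value.
import math

def lyapunov_pair(N, k_f, k_r):
    lead = 1.0 / (2.0 * math.log(N))
    return lead + math.log(k_f), lead + math.log(k_r)

l_f, l_r = lyapunov_pair(N=100, k_f=0.2, k_r=0.95)
# The system can be driven into chaos if either exponent is positive;
# here l_r = 1/(2 ln 100) + ln(0.95) is approximately 0.057 > 0.
print(l_f, l_r)
```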

References

  1. Adachi M, Aihara K (1997) Associative dynamics in a chaotic neural network. Neural Netw 10:83–98


  2. Albers DJ, Sprott JC, Dechert WD (1998) Routes to chaos in neural networks with random weights. Int J Bifurcat Chaos 8:1463–1478


  3. Calitoiu D, Oommen BJ, Nussbaum D (2005) Modeling inaccurate perception: desynchronization issues of a chaotic pattern recognition neural network. In: Proceedings of the 14th Scandinavian conference in image analysis. Joensuu, Finland, June 19–22, 2005, pp 821–830

  4. Fausett L (1994) Fundamentals of neural networks. Prentice Hall, Englewood Cliffs

  5. Freeman WJ (1992) Tutorial in neurobiology: from single neurons to brain chaos. Int J Bifurcat Chaos 2:451–482


  6. Friedman M, Kandel A (1999) Introduction to pattern recognition: statistical, structural, neural and fuzzy logic approaches. World Scientific, Singapore

  7. Fukunaga K (1990) Introduction to statistical pattern recognition. Academic, New York

  8. Kohonen T (1997) Self-organizing maps. Springer, Berlin


  9. Makishima M, Shimizu T (1998) Wandering motion and co-operative phenomena in a chaotic neural network. Int J Bifurcat Chaos 8:891–898


  10. Ripley B (1996) Pattern recognition and neural networks. Cambridge University Press, Cambridge

  11. Rosenstein MT, Collins JJ, De Luca CJ (1993) A practical method for calculating largest Lyapunov exponents from small data sets. Physica D 65:117–134


  12. Schürmann J (1996) Pattern classification: a unified view of statistical and neural approaches. Wiley, New York


  13. Shuai JW, Chen ZX, Liu RT, Wu BX (1997) Maximum hyperchaos in chaotic nonmonotonic neuronal networks. Phys Rev E 56:890–893


  14. Sinha S (1996) Controlled transition from chaos to periodic oscillations in a neural network model. Physica A 224:433–446


  15. Theodoridis S, Koutroumbas K (1999) Pattern recognition. Academic, New York


Author information

Correspondence to Dragos Calitoiu.

Additional information

Research partially supported by the Natural Sciences and Engineering Research Council of Canada.

Appendix

The formal procedure (Footnote 17) for the PR system follows below. As mentioned in the paper, we remark that the periodicity can be observed using a frequency domain analysis. However, for the sake of simulation (i.e., to prove that the concepts are valid), in this pseudo-code we assume that we are dealing with serial machines, and that the periodicity is detected by a "compare against all" strategy. We are open to any superior schemes for achieving this task.

[The original pseudocode listings (figures a and b) are not reproduced here; a sketch of the procedure follows.]
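What follows is a minimal Python sketch of this procedure, not the authors' listing: the one-shot training rule of Footnote 17 is implemented directly, while the network update itself is abstracted behind a caller-supplied step function, since the full M-AdNN equations appear in the body of the paper. All function names are ours.

```python
import numpy as np

def train_weights(patterns):
    """One-shot training (Footnote 17): w_ij accumulates x_i^s * x_j^s
    over all training patterns X^s = [x_1^s ... x_N^s]."""
    N = len(patterns[0])
    W = np.zeros((N, N))
    for X in patterns:
        W += np.outer(X, X)               # w_ij <- w_ij + x_i^s x_j^s
    return W

def recognize(step, state, patterns, t_max=1000):
    """Iterate the network from `state`, recording the time indices at
    which each trained pattern surfaces as the output. `step` performs
    one network iteration and returns (new_state, output)."""
    hits = [[] for _ in patterns]
    for t in range(t_max):
        state, output = step(state)       # one iteration of the CNN
        for s, X in enumerate(patterns):
            if np.array_equal(output, X): # "compare against all"
                hits[s].append(t)
    # A pattern is recognized if it recurs with a finite periodicity
    # after the transient phase.
    for s, times in enumerate(hits):
        if len(times) >= 3:               # enough recurrences to see a period
            periods = sorted(set(np.diff(times).tolist()))
            print(f"pattern {s}: first seen at t={times[0]}, period(s) {periods}")
```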


Cite this article

Calitoiu, D., Oommen, B.J. & Nussbaum, D. Periodicity and stability issues of a chaotic pattern recognition neural network. Pattern Anal Applic 10, 175–188 (2007). https://doi.org/10.1007/s10044-007-0060-3

