Configurations of steady states for Hopfield-type neural networks

https://doi.org/10.1016/j.amc.2006.04.054

Abstract

The dependence of the steady states on the external input vector I for continuous-time and discrete-time Hopfield-type neural networks of n neurons is discussed. Conditions for the existence of one or several paths of steady states are derived. It is shown that, under some conditions, for an external input I there may exist at least 2^n exponentially stable steady states (called a configuration of steady states), and their regions of attraction are estimated. This means that there exist 2^n paths of exponentially stable steady states defined on a certain set of input values. Conditions ensuring the transfer of a configuration of exponentially stable steady states to another configuration of exponentially stable steady states by successive changes of the external input are obtained. These results may be important for the design and maneuvering of Hopfield-type neural networks used to analyze associative memories.

Introduction

In this paper, we consider the continuous-time Hopfield-type neural network defined by the following system of nonlinear differential equations:

$$\dot{x}_i = -a_i x_i + \sum_{j=1}^{n} T_{ij} g_j(x_j) + I_i, \quad i = \overline{1,n}. \tag{1}$$

In [1] a semi-discretization technique has been presented for (1), leading to discrete-time neural networks which faithfully preserve the characteristics of (1), i.e. the steady states and their stability properties. Despite this fact, in this paper we will consider a more general class of discrete-time Hopfield-type neural networks, defined by the following discrete system:

$$x^i_{p+1} = x^i_p - a_i x^i_p + \sum_{j=1}^{n} T_{ij} g_j(x^j_p) + I_i, \quad i = \overline{1,n}, \ p \in \mathbb{N}. \tag{2}$$

In Eqs. (1), (2) we have $a_i > 0$, $I_i$ denote the external inputs, $T = (T_{ij})_{n \times n}$ is the interconnection matrix, and $g_i : \mathbb{R} \to \mathbb{R}$, $i = \overline{1,n}$, represent the neuron input–output activations. For simplicity, we will suppose that the activation functions $g_i$ are of class $C^1$ on $\mathbb{R}$ and $g_i(0) = 0$, for $i = \overline{1,n}$.

The systems (1), (2) can be written in the matrix forms:

$$\dot{x} = -Ax + Tg(x) + I, \tag{3}$$

$$x_{p+1} = x_p - Ax_p + Tg(x_p) + I, \tag{4}$$

where $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n$, $A = \mathrm{diag}(a_1, \ldots, a_n) \in M_{n \times n}$, $I = (I_1, \ldots, I_n)^T \in \mathbb{R}^n$ and $g : \mathbb{R}^n \to \mathbb{R}^n$ is given by $g(x) = (g_1(x_1), g_2(x_2), \ldots, g_n(x_n))^T$.
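The iteration in the matrix form (4) can be sketched numerically. The following is a minimal illustration, not taken from the paper: the network size, parameter values, and the choice of tanh activations are all hypothetical.

```python
import numpy as np

# Hypothetical 2-neuron instance of the discrete-time system (4):
#   x_{p+1} = x_p - A x_p + T g(x_p) + I,  with g_i = tanh (C^1, g(0) = 0).
A = np.diag([1.0, 1.0])            # a_i > 0
T = np.array([[0.10, 0.05],
              [0.05, 0.10]])       # interconnection matrix
I = np.array([0.2, -0.3])          # external input vector

def g(x):
    return np.tanh(x)

def step(x):
    # one iteration of the discrete-time system (4)
    return x - A @ x + T @ g(x) + I

x = np.zeros(2)
for _ in range(200):
    x = step(x)

# At a steady state, Eq. (5) holds: -A x + T g(x) + I = 0.
residual = -A @ x + T @ g(x) + I
print(x, residual)
```

With these (contractive) parameters the orbit settles quickly, and the residual of the steady-state equation is numerically zero.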

Since neural networks like (1) were first considered in [2], [3], they have received much attention because of their applicability to problems of optimization, signal processing, image processing, solving nonlinear algebraic equations, pattern recognition, associative memories and so on. The qualitative analysis of neural dynamics plays an important role in the design of practical neural networks.

To solve problems of optimization, neural control and signal processing, neural networks have to be designed in such a way that, for a given external input, they exhibit only one globally asymptotically stable steady state. This matter has been treated in [4], [5], [6], [7], [8], [9], [1] and the references therein.

On the other hand, if neural networks are used to analyze associative memories, the existence of several locally asymptotically stable steady states is required, as they store information and constitute distributed and parallel neural memory networks. In this case, the purpose of the qualitative analysis is the study of the locally exponentially stable steady states (existence, number, regions of attraction) so as to ensure the recall capability of the models. Conditions for the local asymptotic stability of the steady states (and estimations of their regions of attraction) for Hopfield-type neural networks have been derived in [10], [11], [12], [13], [14], [15] using single Lyapunov functions and in [16], [17] using vector Lyapunov functions.

The aim of this paper is to show that, under some conditions, for some values of the external input I there exists a configuration of at least 2^n asymptotically stable steady states for (1), (2). Therefore, there exist 2^n paths of steady states defined on a set of values of the external input. Estimations of the regions of attraction of these steady states are given. Finally, we address the problem of controllability of these configurations of steady states.

Section snippets

Paths of steady states

A steady state $x = (x_1, x_2, \ldots, x_n)^T$ of (1), (2) corresponding to the external input $I = (I_1, I_2, \ldots, I_n)^T$ is a solution of the equation:

$$-Ax + Tg(x) + I = 0. \tag{5}$$

For a given external input vector $I \in \mathbb{R}^n$ the system (5) may have one solution, several solutions, or no solution at all. On the other hand, for any state $x \in \mathbb{R}^n$ there exists a unique external input $I(x) \in \mathbb{R}^n$ such that x is a steady state of (1), (2) corresponding to the input I(x). Clearly, the external input function $I : \mathbb{R}^n \to \mathbb{R}^n$ is of class $C^1$ on $\mathbb{R}^n$…
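The observation above has a direct computational reading: rearranging Eq. (5) gives the unique input $I(x) = Ax - Tg(x)$ that makes any prescribed state x a steady state. A small sketch with hypothetical parameters (the matrices and target state below are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical 2-neuron parameters.
A = np.diag([2.0, 3.0])
T = np.array([[0.5, -0.2],
              [0.3,  0.4]])

def g(x):
    return np.tanh(x)

def input_for_state(x):
    # Unique external input I(x) such that x solves -A x + T g(x) + I = 0,
    # obtained by solving Eq. (5) for I.
    return A @ x - T @ g(x)

x_target = np.array([1.0, -0.5])
I = input_for_state(x_target)

# Check: x_target is indeed a steady state for this input.
residual = -A @ x_target + T @ g(x_target) + I
print(I, residual)
```

This is the easy direction (state to input); the hard direction, solving (5) for x given I, is exactly where multiple solutions may appear.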

Controllability

Definition 8

A change at a certain moment of the external input from I′ to I″ is called a maneuver and is denoted by I′ → I″.

We say that the maneuver I′ → I″ made at t = t₀ is successful on the path $\phi_\alpha : H_\alpha \to \Delta_\alpha$ of (1) if $I', I'' \in H_\alpha$ and if the solution of the initial value problem

$$\dot{x} = -Ax + Tg(x) + I'', \quad x(t_0) = \phi_\alpha(I')$$

tends to $\phi_\alpha(I'')$ as t → ∞.

We say that the maneuver I′ → I″ made at p = p₀ is successful on the path $\phi_\alpha : H_\alpha \to \Delta_\alpha$ of (2) if $I', I'' \in H_\alpha$ and if the solution of the initial value problem

$$x_{p+1} = x_p - Ax_p + Tg(x_p) + I'', \quad x_{p_0} = \phi_\alpha(I')$$

tends to $\phi_\alpha(I'')$ as p → ∞.
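The two definitions can be illustrated by simulation. The sketch below uses a hypothetical single-neuron contractive instance of the discrete-time system (parameter values and inputs are assumptions, not from the paper): the network is first settled at the steady state for I′, then the input is switched to I″ and the orbit is restarted from that state, as in the definition of a successful maneuver.

```python
import numpy as np

# Hypothetical single-neuron discrete-time network (4): a = 1, T = (0.3).
a, t11 = 1.0, 0.3

def g(s):
    return np.tanh(s)

def step(x, I):
    # one iteration of x_{p+1} = x_p - a x_p + t11 g(x_p) + I
    return x - a * x + t11 * g(x) + I

def settle(x, I, iters=300):
    # iterate until (numerically) at the steady state for input I
    for _ in range(iters):
        x = step(x, I)
    return x

I1, I2 = 0.5, -0.4
x1 = settle(0.0, I1)   # steady state phi(I') for the input I'
x2 = settle(x1, I2)    # maneuver I' -> I'': restart from phi(I') with input I''

# The maneuver is successful: the orbit tends to the steady state for I'',
# i.e. the residual of Eq. (5) at x2 with input I2 vanishes.
residual = -a * x2 + t11 * g(x2) + I2
print(x1, x2, residual)
```

In this contractive regime there is a single steady state per input, so every maneuver succeeds; the interesting case in the paper is when several paths coexist and the maneuver must stay within one region of attraction.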

Configurations of 2^n steady states

In this section, we will consider the following hypothesis for the activation functions:

  • (H1)

    The activation functions $g_i$, $i = \overline{1,n}$, are bounded:
    $$|g_i(s)| \le 1 \quad \text{for any } s \in \mathbb{R}, \ i = \overline{1,n}.$$

  • (H2)

    There exists $\alpha \in (0, 1)$ such that the functions $g_i$, $i = \overline{1,n}$, satisfy:
    $$g_i(s) \ge \alpha \ \text{if } s \ge 1 \quad \text{and} \quad g_i(s) \le -\alpha \ \text{if } s \le -1.$$

  • (H3)

    The derivatives of the activation functions $g_i$, $i = \overline{1,n}$, satisfy:
    $$|g_i'(s)| < \frac{a_i}{\sum_{j=1}^{n} |T_{ji}|} \quad \text{for } |s| \ge 1.$$

These hypotheses are not too restrictive. Indeed, if the activation functions are bounded, but not by 1, one can consider the new activation functions $g_i$…
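As a concrete illustration, the standard choice $g_i = \tanh$ can be checked numerically against (H1)–(H3). The parameter values below are hypothetical, and the grid check is a sanity test, not a proof:

```python
import numpy as np

# Hypothetical parameters a_i and T for a 2-neuron network.
a = np.array([2.0, 2.0])
T = np.array([[0.8, 0.5],
              [0.4, 0.9]])

s = np.linspace(-10, 10, 20001)
g = np.tanh(s)
dg = 1.0 - np.tanh(s) ** 2              # derivative of tanh

# (H1): |g_i(s)| <= 1 for all s.
h1 = bool(np.all(np.abs(g) <= 1))

# (H2): holds with alpha = tanh(1), since tanh is increasing.
alpha = np.tanh(1.0)
h2 = bool(np.all(g[s >= 1] >= alpha) and np.all(g[s <= -1] <= -alpha))

# (H3): |g_i'(s)| < a_i / sum_j |T_ji| for |s| >= 1.
bound = a / np.abs(T).sum(axis=0)       # column sums give sum_j |T_ji|
h3 = all(np.all(dg[np.abs(s) >= 1] < bound[i]) for i in range(2))

print(h1, h2, h3)
```

Here the derivative of tanh at |s| = 1 is about 0.42, well below both bounds a_i / Σ_j |T_ji| (≈ 1.67 and ≈ 1.43), so all three hypotheses hold for this choice of parameters.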

Conclusions

For continuous-time and discrete-time Hopfield-type neural networks, conditions ensuring the existence of 2^n paths of exponentially stable steady states defined on a certain set of external input values have been derived. Moreover, it has been shown that the system is controllable along each of these paths of steady states. Finding similar conditions for Cohen–Grossberg neural networks may constitute a direction for future research.

References (22)

  • J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proceedings of the National Academy of Sciences (1982)