Multistability of competitive neural networks with time-varying and distributed delays

https://doi.org/10.1016/j.nonrwa.2007.11.014

Abstract

In this paper, with two classes of general activation functions, we investigate the multistability of competitive neural networks with time-varying and distributed delays. By formulating parameter conditions and using an inequality technique, several novel delay-independent sufficient conditions are derived that ensure the existence of $3^N$ equilibria and the exponential stability of $2^N$ equilibria. In addition, estimates of positively invariant sets and basins of attraction for these stable equilibria are obtained. Two examples are given to show the effectiveness of our theory.

Introduction

It is well known that neural networks play important roles in many applications, such as classification, associative memory, image processing, pattern recognition, parallel computation, optimization, and decision making [1], [2], [3], [4]. The theory of the dynamics of such networks has been developed according to the purposes of these applications. On the one hand, in applications to parallel computation and optimization, the existence of a computable solution for all possible initial states is the ideal situation. Mathematically, this means that the network has an equilibrium to which every state in its neighborhood converges; this property is called “monostability.” On the other hand, the existence of many equilibria is a necessary feature in applications of neural networks to associative memory storage, pattern recognition, and decision making. The notion of “multistability” describes the coexistence of multiple stable patterns, such as equilibria or periodic orbits.

Competitive neural networks (CNNs) with different time scales were proposed in [5]; they model the dynamics of cortical cognitive maps with unsupervised synaptic modifications. In this model, there are two types of state variables: that of the short-term memory (STM), describing the fast neural activity, and that of the long-term memory (LTM), describing the slow unsupervised synaptic modifications. A typical form of multitime-scale CNNs with time-varying and distributed delays is described by the following functional differential equations:
$$
\begin{cases}
\text{STM:}\ \ \epsilon\,\dfrac{dx_i(t)}{dt}=-a_i x_i(t)+\displaystyle\sum_{j=1}^{N}D_{ij}f_j(x_j(t))+\sum_{j=1}^{N}D_{ij}^{\tau}f_j(x_j(t-\tau_{ij}(t)))+\sum_{j=1}^{N}\bar{D}_{ij}\int_{-\infty}^{t}K_{ij}(t-s)f_j(x_j(s))\,ds+B_i\sum_{j=1}^{P}m_{ij}(t)y_j+I_i,\\[2mm]
\text{LTM:}\ \ \dfrac{dm_{ij}(t)}{dt}=-m_{ij}(t)+y_j f_i(x_i(t)),\qquad i=1,2,\dots,N,\ j=1,2,\dots,P,
\end{cases}\tag{1}
$$
where $x_i(t)$ is the neuron's current activity level, $a_i>0$ is the time constant of the neuron, $f_j(x_j(t))$ is the output of the $j$th neuron, $m_{ij}(t)$ is the synaptic efficiency, $y_j$ is the constant external stimulus, $D_{ij}$ represents the connection weight between the $i$th neuron and the $j$th neuron, $B_i$ is the strength of the external stimulus, $\epsilon$ is the time scale of the STM state, $D_{ij}^{\tau}$ and $\bar{D}_{ij}$ represent the synaptic weights of the delayed feedback, $I_i$ is the constant input, and $\tau_{ij}(t)$ corresponds to the transmission delay and satisfies $0<\tau_{ij}(t)<\tau_{ij}$ ($\tau_{ij}$ a positive constant).

After setting $S_i(t)=\sum_{j=1}^{P}m_{ij}(t)y_j=y^{T}m_i(t)$, where $y=(y_1,y_2,\dots,y_P)^{T}$ and $m_i(t)=(m_{i1}(t),m_{i2}(t),\dots,m_{iP}(t))^{T}$, and summing the LTM equations over $j$, the networks (1) can be rewritten in the following form:
$$
\begin{cases}
\text{STM:}\ \ \epsilon\,\dfrac{dx_i(t)}{dt}=-a_i x_i(t)+\displaystyle\sum_{j=1}^{N}D_{ij}f_j(x_j(t))+\sum_{j=1}^{N}D_{ij}^{\tau}f_j(x_j(t-\tau_{ij}(t)))+\sum_{j=1}^{N}\bar{D}_{ij}\int_{-\infty}^{t}K_{ij}(t-s)f_j(x_j(s))\,ds+B_i S_i(t)+I_i,\\[2mm]
\text{LTM:}\ \ \dfrac{dS_i(t)}{dt}=-S_i(t)+|y|^{2}f_i(x_i(t)),\qquad i=1,2,\dots,N,
\end{cases}\tag{2}
$$
where $|y|^{2}=y_1^{2}+\cdots+y_P^{2}$ is a constant. Without loss of generality, the input stimulus $y$ is assumed to be normalized with unit magnitude, $|y|^{2}=1$, and the fast time-scale parameter $\epsilon$ is also assumed to be unity; the networks above then simplify to
$$
\begin{cases}
\text{STM:}\ \ \dfrac{dx_i(t)}{dt}=-a_i x_i(t)+\displaystyle\sum_{j=1}^{N}D_{ij}f_j(x_j(t))+\sum_{j=1}^{N}D_{ij}^{\tau}f_j(x_j(t-\tau_{ij}(t)))+\sum_{j=1}^{N}\bar{D}_{ij}\int_{-\infty}^{t}K_{ij}(t-s)f_j(x_j(s))\,ds+B_i S_i(t)+I_i,\\[2mm]
\text{LTM:}\ \ \dfrac{dS_i(t)}{dt}=-S_i(t)+f_i(x_i(t)),\qquad i=1,2,\dots,N,
\end{cases}\tag{3}
$$
where the delay kernels $K_{ij}(s):[0,+\infty)\to[0,+\infty)$ are piecewise continuous integrable functions satisfying
$$
\int_{0}^{+\infty}K_{ij}(s)\,ds=1,\qquad \int_{0}^{+\infty}K_{ij}(s)e^{\mu s}\,ds<+\infty \tag{4}
$$
for some positive constant $\mu$ and $i,j=1,2,\dots,N$.
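As a quick illustration (not part of the paper), system (3) can be integrated numerically. The sketch below uses forward Euler for a scalar ($N=1$) instance with purely illustrative parameters, $f(x)=\tanh(x)$, a constant stand-in delay $\tau$, and the exponential kernel $K(s)=\mu e^{-\mu s}$, which satisfies the normalization in (4); for this kernel the distributed-delay term $z(t)=\int_{-\infty}^{t}K(t-s)f(x(s))\,ds$ obeys the auxiliary ODE $\dot z=\mu(f(x)-z)$ (the linear chain trick), so no history integral is needed.

```python
import math

# Forward-Euler sketch of the simplified system (3) with N = 1.
# All parameter values here are illustrative, not taken from the paper.
a, D, Dtau, Dbar, B, I = 1.0, 0.5, 0.25, 0.25, 0.5, 0.0
tau = 1.0      # constant stand-in for the time-varying delay tau_11(t)
mu = 2.0       # kernel K(s) = mu*exp(-mu*s); integrates to 1 as (4) requires
h, T = 0.001, 20.0
delay_steps = int(tau / h)

x = [0.5] * (delay_steps + 1)   # constant history phi(t) = 0.5 on [-tau, 0]
S = 0.0                         # LTM initial state psi(0)
z = math.tanh(0.5)              # z(0): kernel integrated against f(history)
for _ in range(int(T / h)):
    f_now = math.tanh(x[-1])
    f_del = math.tanh(x[-1 - delay_steps])          # f(x(t - tau))
    dx = -a * x[-1] + D * f_now + Dtau * f_del + Dbar * z + B * S + I
    dS = -S + f_now                                 # LTM equation of (3)
    dz = mu * (f_now - z)                           # linear chain trick
    x.append(x[-1] + h * dx)
    S += h * dS
    z += h * dz
print(round(x[-1], 3), round(S, 3))
```

At an equilibrium, $S=z=f(x)$, so $x$ must solve $x=(D+D^{\tau}+\bar D+B)f(x)=1.5\tanh(x)$, which has three roots here; the trajectory above settles near the positive one, consistent with the multistability picture of the paper.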

In the past few years, the monostability analysis of neural networks with time-varying and/or distributed delays has been developed [6], [7], [8], [9], [10], [11], [12], [13]. In particular, the theory of the unique equilibrium point and global convergence to it for CNNs and their various generalizations has been extensively studied; see [5], [14], [15], [16], [17]. Recently, the multistability analysis of neural networks has attracted the attention of many researchers [18], [19], [20], [21]. In [18], with the unsaturated piecewise-linear activation function $f(x)=\max\{0,x\}$, the multistability of system (3) without delay, that is, with $D_{ij}^{\tau}=\bar{D}_{ij}=0$ $(i,j=1,2,\dots,N)$, was investigated by using local inhibition and constructing an energy-like function. The result there relies strongly on the piecewise linearity and unsaturation of the activation function. In [19], [20], the multistability of neural networks without and with constant delays was studied by formulating parameter conditions. Inspired by [19], [20], in this paper we study the multistability of system (3) with two classes of general activation functions. First, we derive conditions ensuring the existence of $3^{N}$ equilibria for system (3) with these two classes of activation functions, by constructing parameter conditions from a geometrical observation. Second, we establish a series of new criteria for the exponential stability of $2^{N}$ equilibria by means of an inequality technique. In addition, estimates of positively invariant sets and basins of attraction for these stable stationary solutions are derived.

This paper is organized as follows. In Section 2, we consider two classes of activation functions which are commonly employed in neural networks. We then obtain conditions for the existence of $3^{N}$ equilibria. In Section 3, we show that, under additional conditions, there are $2^{N}$ regions in $\mathbb{R}^{2N}$ that are positively invariant under the flow generated by system (3). Subsequently, it is shown that these $2^{N}$ equilibria are exponentially stable. Two numerical simulations are given in Section 4 to illustrate our theory and the distinct dynamical behaviors for different activation functions. Finally, concluding remarks are summarized in Section 5.


Activation functions and multiple equilibria

The initial conditions associated with system (3) are of the form $x_i(t)=\phi_i(t)$, $S_i(t)=\psi_i(t)$, $t\in(-\infty,0]$.

For convenience, we introduce two notations. For any $u(t)=(u_1(t),u_2(t),\dots,u_N(t))^{T}\in\mathbb{R}^{N}$, define
$$\|u(t)\|=\Big[\sum_{i=1}^{N}|u_i(t)|^{p/q}\Big]^{q/p},\qquad p\ge q>0.$$
For any $\phi(s)=(\phi_1(s),\phi_2(s),\dots,\phi_N(s))^{T}\in\mathbb{R}^{N}$, $s\in(-\infty,0]$, define
$$\|\phi\|=\Big[\sup_{s\in(-\infty,0]}\sum_{i=1}^{N}|\phi_i(s)|^{p/q}\Big]^{q/p},\qquad p\ge q>0,$$
where the $\phi_i(s)$ $(i=1,2,\dots,N)$ are continuous and bounded functions on $(-\infty,0]$.
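To make the first norm concrete, the hypothetical helper below (not from the paper) evaluates $\|u\|=\big[\sum_i|u_i|^{p/q}\big]^{q/p}$ for a given state vector, assuming $p\ge q>0$:

```python
import numpy as np

def weighted_norm(u, p, q):
    """||u|| = (sum_i |u_i|^(p/q))^(q/p), defined for p >= q > 0."""
    if not (p >= q > 0):
        raise ValueError("need p >= q > 0")
    u = np.asarray(u, dtype=float)
    return float(np.sum(np.abs(u) ** (p / q)) ** (q / p))
```

For example, with $p=2q$ this reduces to the Euclidean norm (`weighted_norm([3, 4], 2, 1)` gives `5.0`), while $p=q$ gives the 1-norm.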

To prove our results, the following two lemmas are necessary.

Lemma 1

[22]

Let $a,b\ge 0$ and $s\ge 1$; then $ab^{s-1}\le \dfrac{1}{s}a^{s}+\dfrac{s-1}{s}b^{s}$.

Stability of multiple equilibria

In this section, we shall give some positively invariant sets for system (3) and investigate stability of the equilibrium point in each invariant set. As a result, we also obtain a basin of attraction for each of the exponentially stable equilibria. Firstly, we give the third condition for system (3) with activation functions in class A and the second condition for system (3) with activation functions in class B:

  • (H3A)

    There exist constants $p\ge q>0$ and $\mu_i>0$ $(i=1,2,\dots,2N)$ such that $\mu_i\big[pa_i-(p-q)\sum_{j=1}^{N}(|D_{ij}|+|D_{ij}$…

Two illustrative examples

We consider the following two-dimensional competitive neural networks with time-varying and distributed delays:
$$
\begin{cases}
\dfrac{dx_i(t)}{dt}=-a_i x_i(t)+\displaystyle\sum_{j=1}^{2}D_{ij}f_j(x_j(t))+\sum_{j=1}^{2}D_{ij}^{\tau}f_j(x_j(t-\tau_{ij}(t)))+\bar{D}_{i1}\int_{t-5}^{t}K_{i1}(t-s)f_1(x_1(s))\,ds+\bar{D}_{i2}\int_{t-6}^{t}K_{i2}(t-s)f_2(x_2(s))\,ds+B_i S_i(t)+I_i,\\[2mm]
\dfrac{dS_i(t)}{dt}=-S_i(t)+f_i(x_i(t)),\qquad i=1,2,
\end{cases}\tag{34}
$$
for $t>0$, where
$$K_{i1}(s)=\frac{2e^{-2s}}{1-e^{-10}},\qquad K_{i2}(s)=\frac{e^{-s}}{1-e^{-6}}\qquad(i=1,2).$$
We can easily check that the $K_{ij}(s)$ $(i,j=1,2)$ above satisfy the assumptions (4).
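The normalization requirement in (4) is easy to confirm numerically. The illustrative sketch below (standard library only; not part of the paper) integrates both example kernels over their supports $[0,5]$ and $[0,6]$ with the composite trapezoidal rule:

```python
import math

def trapezoid(f, a, b, n=100000):
    """Composite trapezoidal rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
    return total * h

# The two delay kernels from system (34).
K1 = lambda s: 2.0 * math.exp(-2.0 * s) / (1.0 - math.exp(-10.0))  # K_i1 on [0, 5]
K2 = lambda s: math.exp(-s) / (1.0 - math.exp(-6.0))               # K_i2 on [0, 6]

print(round(trapezoid(K1, 0.0, 5.0), 6), round(trapezoid(K2, 0.0, 6.0), 6))
# prints: 1.0 1.0
```

Analytically, $\int_0^5 2e^{-2s}\,ds=1-e^{-10}$ and $\int_0^6 e^{-s}\,ds=1-e^{-6}$, so the normalizing denominators make both integrals exactly 1, as the quadrature confirms.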

Example 1

For system (34), take $a_1=1$, $a_2=2$, $D_{11}=D_{11}^{\tau}=\bar{D}_{11}=B_1=\frac12$, $D_{22}=D_{22}^{\tau}=\bar{D}_{22}=B_2=1$, $D_{12}=D_{12}^{\tau}=0.15$, $D_{21}$…

Conclusions

In this paper, two classes of activation functions, which are commonly employed in neural networks, have been considered. By means of a geometrical method and an inequality technique, several novel sufficient conditions have been derived ensuring the existence of $3^{N}$ equilibria and the exponential stability of $2^{N}$ equilibria for competitive neural networks with time-varying and distributed delays. Compared with [18], the method used here is valid for a class of general activation functions, and the stability analysis of…

Acknowledgments

The authors appreciate the editor’s work and the reviewer’s insightful comments and constructive suggestions. This work was jointly supported by the National Natural Science Foundation of China under Grant 60574043 and the Natural Science Foundation of Jiangsu Province of China under Grant BK2006093.

References

  • J. Hopfield, Neurons with graded response have collective computational properties like those of two-state neurons, Proc. Natl. Acad. Sci. USA (1984).