
Physics Letters A

Volume 354, Issue 4, 5 June 2006, Pages 288-297

Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays

https://doi.org/10.1016/j.physleta.2006.01.061

Abstract

This Letter is concerned with the global asymptotic stability analysis problem for a class of uncertain stochastic Hopfield neural networks with discrete and distributed time-delays. By utilizing a Lyapunov–Krasovskii functional, using the well-known S-procedure and conducting stochastic analysis, we show that the addressed neural networks are robustly, globally, asymptotically stable if a convex optimization problem is feasible. Then, the stability criteria are derived in terms of linear matrix inequalities (LMIs), which can be effectively solved by some standard numerical packages. The main results are also extended to the multiple time-delay case. Two numerical examples are given to demonstrate the usefulness of the proposed global stability condition.

Introduction

In the past two decades, since its initiation in [1], the well-known Hopfield neural network has been extensively studied and successfully applied in many areas such as combinatorial optimization, signal processing and pattern recognition; see, e.g., [1], [2], [3], [4], [5]. In particular, the stability problem of Hopfield neural networks has received much research attention since, in applications, the neural network is often assumed to have a unique equilibrium that is globally stable. On the other hand, axonal signal transmission delays often occur in various neural networks and may cause undesirable dynamic network behaviors such as oscillation and instability. Therefore, there has been growing research interest in stability analysis problems for delayed neural networks, and a large body of literature is now available. Sufficient conditions, either delay-dependent or delay-independent, have been proposed to guarantee the asymptotic or exponential stability of neural networks; see [6], [7], [8], [9] for some recent results. It is noticed that, so far, most works on delayed neural networks have dealt with stability analysis for networks with discrete time-delays.

Traditionally, discrete time-delays in models of delayed feedback systems serve as a good approximation in simple circuits having a small number of cells. Nevertheless, a neural network usually has a spatial nature due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths. Such an inherent nature can be suitably modeled by distributed delays [10], [11], because the signal propagation is distributed over a certain time period. For example, in [11], a neural circuit with distributed delays has been designed that solves the general problem of recognizing patterns in a time-dependent signal. Hence, both discrete and distributed delays should be taken into account when modeling neural networks [12]. Recently, there have been some initial studies on the stability analysis of various neural networks with distributed time-delays; see [13], [14], [15]. In [14], criteria ensuring the existence, uniqueness, and global asymptotic stability of the equilibrium have been derived for Hopfield neural networks involving distributed delays. It should be mentioned that, most recently, the global asymptotic stability analysis problem has been investigated in [16] for a general class of neural networks with both discrete and distributed time-delays, where a linear matrix inequality (LMI) approach has been developed to establish sufficient stability conditions.

In recent years, stability analysis for neural networks in the presence of parameter uncertainties and/or stochastic perturbations has attracted initial research attention. The reason is twofold: (1) the connection weights of the neurons depend on certain resistance and capacitance values that include uncertainties (modeling errors), and (2) in real nervous systems, synaptic transmission is a noisy process caused by random fluctuations in neurotransmitter release and other probabilistic factors. Therefore, stability properties have been investigated for delayed neural networks with parameter uncertainties (see, e.g., [17], [18]) or external stochastic perturbations (see, e.g., [19], [20]). However, to the best of the authors' knowledge, the robust stability analysis problem for stochastic Hopfield neural networks with both discrete and distributed delays has not been properly addressed; it remains important and challenging.

In this Letter, we deal with the global robust stability analysis problem for a class of stochastic Hopfield neural networks with discrete and distributed time-delays. By utilizing a Lyapunov–Krasovskii functional and using the well-known S-procedure, we recast the addressed stability analysis problem into a convex optimization problem. Different from the commonly used matrix norm theories (such as the M-matrix method), a unified linear matrix inequality (LMI) approach is developed to establish sufficient conditions for the neural networks to be robustly, globally, asymptotically stable. Note that LMIs can be easily solved by using the Matlab LMI toolbox, and no tuning of parameters is required [21]. Two numerical examples are provided to show the usefulness of the proposed global stability condition.
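As the paragraph above notes, the resulting LMIs can be handed to standard numerical packages with no parameter tuning. A minimal delay-free analogue (with hypothetical matrix values, much simpler than the paper's delayed stochastic LMIs) illustrates the idea that a stability certificate reduces to a matrix feasibility problem: a matrix A is Hurwitz if and only if the Lyapunov equation AᵀP + PA = −Q, with Q > 0, has a symmetric positive definite solution P.

```python
import numpy as np

# Minimal delay-free analogue of an LMI-based stability certificate,
# with hypothetical values: A is Hurwitz iff  A^T P + P A = -Q  (Q > 0)
# has a symmetric solution P > 0. Solved via the Kronecker/vec identity.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])   # hypothetical Hurwitz matrix
Q = np.eye(2)

n = A.shape[0]
# vec(A^T P + P A) = (I (x) A^T + A^T (x) I) vec(P)  with column-major vec
K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(K, (-Q).flatten(order="F")).reshape(n, n, order="F")

assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)  # certificate: P > 0
```

In the paper proper, the analogous role is played by the Lyapunov–Krasovskii functional, and the feasibility problem is an LMI solvable by, e.g., the Matlab LMI toolbox.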

Notations. The notations are quite standard. Throughout this Letter, Rⁿ and Rⁿˣᵐ denote, respectively, the n-dimensional Euclidean space and the set of all n×m real matrices. The superscript "T" denotes matrix transposition, and the notation X ≥ Y (respectively, X > Y), where X and Y are symmetric matrices, means that X − Y is positive semidefinite (respectively, positive definite). Iₙ is the n×n identity matrix. |·| is the Euclidean norm in Rⁿ. If A is a matrix, denote by ‖A‖ its operator norm, i.e., ‖A‖ = sup{|Ax| : |x| = 1} = √(λ_max(AᵀA)), where λ_max(·) (respectively, λ_min(·)) means the largest (respectively, smallest) eigenvalue of A. l²[0,∞) is the space of square-integrable vector functions. Moreover, let (Ω, F, {F_t}_{t≥0}, P) be a complete probability space with a filtration {F_t}_{t≥0} satisfying the usual conditions (i.e., the filtration contains all P-null sets and is right continuous). Denote by L^p_{F₀}([−h,0]; Rⁿ) the family of all F₀-measurable C([−h,0]; Rⁿ)-valued random variables ξ = {ξ(θ) : −h ≤ θ ≤ 0} such that sup_{−h≤θ≤0} E|ξ(θ)|^p < ∞, where E{·} stands for the mathematical expectation operator with respect to the given probability measure P. The shorthand diag{M₁, M₂, …, M_N} denotes a block diagonal matrix with diagonal blocks M₁, M₂, …, M_N. Sometimes, the arguments of a function or a matrix will be omitted in the analysis when no confusion can arise.
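The operator norm defined in the Notations, ‖A‖ = sup{|Ax| : |x| = 1} = √(λ_max(AᵀA)), is exactly the spectral (2-)norm; a quick numeric check on an arbitrary example matrix:

```python
import numpy as np

# Check that sqrt(lambda_max(A^T A)) equals the spectral norm ||A||_2.
# The example matrix is arbitrary.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

spectral = np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))
assert np.isclose(spectral, np.linalg.norm(A, 2))

# Sanity check of the sup characterization: |Ax| <= ||A|| for any unit x.
x = np.array([0.6, 0.8])                      # |x| = 1
assert np.linalg.norm(A @ x) <= spectral + 1e-12
```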

Section snippets

Problem formulation

Recently, Hopfield neural networks with time delays, either discrete or distributed, have been widely investigated, and many stability criteria have been established; see, e.g., [7], [8], [13], [14], [16], [17] for some recent results. As in [16], the Hopfield neural network with both discrete and distributed delays can be described by the following model:

u̇(t) = −A u(t) + W₀ g₀(u(t)) + W₁ g₁(u(t−h)) + W₂ ∫_{t−τ}^{t} g₂(u(s)) ds + V,

where u(t) = [u₁(t), u₂(t), …, uₙ(t)]ᵀ ∈ Rⁿ is the state vector associated with the n neurons
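To fix ideas, the model above can be explored with a forward-Euler sketch. All parameter values below are hypothetical, the activations are taken as tanh for illustration (the paper only requires sector/Lipschitz-type bounds), and the distributed-delay integral is approximated by a Riemann sum over the stored history.

```python
import numpy as np

# Forward-Euler sketch of
#   du/dt = -A u(t) + W0 g0(u(t)) + W1 g1(u(t-h))
#           + W2 * integral_{t-tau}^{t} g2(u(s)) ds + V
# with hypothetical parameters and g0 = g1 = g2 = tanh.
n, dt, h, tau, T = 2, 0.01, 0.5, 0.3, 20.0
A  = np.diag([1.0, 1.0])
W0 = np.array([[0.2, -0.1], [0.1, 0.2]])
W1 = np.array([[0.1, 0.05], [-0.05, 0.1]])
W2 = np.array([[0.05, 0.0], [0.0, 0.05]])
V  = np.array([0.1, -0.1])
g  = np.tanh

steps_h, steps_tau = int(h / dt), int(tau / dt)
hist = [np.zeros(n)] * (max(steps_h, steps_tau) + 1)  # constant zero initial history

for _ in range(int(T / dt)):
    u = hist[-1]
    u_delayed = hist[-1 - steps_h]                         # u(t - h)
    distributed = sum(g(v) for v in hist[-steps_tau:]) * dt  # Riemann sum of g2(u(s))
    du = -A @ u + W0 @ g(u) + W1 @ g(u_delayed) + W2 @ distributed + V
    hist.append(u + dt * du)

u_final = hist[-1]  # with these small weights the state settles near an equilibrium
```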

Main results and proofs

We first give the following lemmas that are useful in deriving our LMI-based stability criteria.

Lemma 1

Let x ∈ Rⁿ, y ∈ Rⁿ and ε > 0. Then we have xᵀy + yᵀx ≤ ε xᵀx + ε⁻¹ yᵀy.

Proof

The proof follows immediately from the inequality (ε^{1/2} x − ε^{−1/2} y)ᵀ (ε^{1/2} x − ε^{−1/2} y) ≥ 0.  □
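Lemma 1 is easy to spot-check numerically on random instances:

```python
import numpy as np

# Numeric spot-check of Lemma 1:
#   x^T y + y^T x <= eps * x^T x + (1/eps) * y^T y
# for arbitrary real vectors x, y and any eps > 0.
rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    eps = rng.uniform(0.1, 10.0)
    lhs = x @ y + y @ x
    rhs = eps * (x @ x) + (y @ y) / eps
    assert lhs <= rhs + 1e-12
```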

Lemma 2

(S-procedure) [25]

Let M = Mᵀ, H and E be real matrices of appropriate dimensions, with F satisfying (8). Then

M + HFE + EᵀFᵀHᵀ < 0

if and only if there exists a positive scalar ε > 0 such that

M + ε⁻¹ HHᵀ + ε EᵀE < 0,

or equivalently

[ M     H     εEᵀ ]
[ Hᵀ   −εI    0   ]  < 0.
[ εE    0    −εI  ]
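The equivalence in Lemma 2 can be illustrated on a hand-built instance (values below are arbitrary, chosen so that M dominates the perturbation terms):

```python
import numpy as np

# Numeric illustration of Lemma 2 (S-procedure): with M = M^T sufficiently
# negative, if  M + (1/eps) H H^T + eps E^T E < 0  then the 3x3 block matrix
# of the lemma is negative definite, and  M + H F E + E^T F^T H^T < 0
# for every F with ||F|| <= 1. Instance values are arbitrary.
rng = np.random.default_rng(1)
n, eps = 3, 1.0
H = rng.normal(size=(n, n)) * 0.2
E = rng.normal(size=(n, n)) * 0.2
M = -2.0 * np.eye(n)                       # symmetric, "negative enough"

small = M + H @ H.T / eps + eps * E.T @ E
assert np.all(np.linalg.eigvalsh(small) < 0)

block = np.block([[M,       H,                eps * E.T],
                  [H.T,    -eps * np.eye(n),  np.zeros((n, n))],
                  [eps * E, np.zeros((n, n)), -eps * np.eye(n)]])
assert np.all(np.linalg.eigvalsh(block) < 0)

F = rng.normal(size=(n, n))
F /= max(1.0, np.linalg.norm(F, 2))        # enforce ||F|| <= 1
assert np.all(np.linalg.eigvalsh(M + H @ F @ E + E.T @ F.T @ H.T) < 0)
```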

Lemma 3

[21]

Given constant matrices Ω₁, Ω₂, Ω₃, where Ω₁ = Ω₁ᵀ and 0 < Ω₂ = Ω₂ᵀ, then Ω₁ + Ω₃ᵀ Ω₂⁻¹ Ω₃ < 0 if and only if

[ Ω₁    Ω₃ᵀ ]
[ Ω₃   −Ω₂  ]  < 0.
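Lemma 3 is the well-known Schur complement lemma from [21]; a numeric sanity check on one arbitrary instance:

```python
import numpy as np

# Numeric spot-check of the Schur complement lemma (Lemma 3, from [21]):
# with Omega1 = Omega1^T and Omega2 = Omega2^T > 0,
#   Omega1 + Omega3^T Omega2^{-1} Omega3 < 0
# iff  [[Omega1, Omega3^T], [Omega3, -Omega2]] < 0. Instance is arbitrary.
rng = np.random.default_rng(2)
n = 3
O1 = -3.0 * np.eye(n)                      # symmetric, negative definite
O2 = np.eye(n) + 0.1 * np.ones((n, n))     # symmetric positive definite
O3 = rng.normal(size=(n, n)) * 0.3

reduced = O1 + O3.T @ np.linalg.solve(O2, O3)   # Omega1 + Omega3^T Omega2^{-1} Omega3
block = np.block([[O1, O3.T],
                  [O3, -O2]])
assert np.all(np.linalg.eigvalsh(reduced) < 0)
assert np.all(np.linalg.eigvalsh(block) < 0)
```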

Numerical examples

Two simple examples are presented here in order to illustrate the usefulness of our main results. Our aim is to examine the global asymptotic stability of a given delayed stochastic neural network.

Example 1

In this example, we consider a two-neuron stochastic neural network (35) with both discrete and distributed delays but without parameter uncertainties, where

A = [0.9  0;  0  0.9],    W₀ = [1.2  1.5;  1.7  1.2],    W₁ = [1.1  0.5;  0.5  0.8],
W₂ = [0.6  0.1;  0.1  0.2],    G₀ = [0.1  0;  0  0.2],    G₁ = [0.2  0;  0  0.2],
G₂ = [0.1  0;  0  0.1],    Σ₁ = [0.08  0;  0  0.08],    Σ₂ =

Conclusions

In this Letter, we have dealt with the problem of global asymptotic stability analysis for a class of uncertain stochastic delayed neural networks that involve both discrete and distributed time-delays. We have removed the traditional monotonicity and smoothness assumptions on the activation functions. A linear matrix inequality (LMI) approach has been developed to solve the problem addressed. The stability criteria have been derived in terms of the positive definite solution to an LMI.

References (29)

  • G. Joya et al., Neurocomputing (2002)
  • J. Cao et al., Phys. Lett. A (2005)
  • M.P. Joy, Neural Networks (2000)
  • S. Ruan et al., Physica D (2004)
  • J. Liang et al., Appl. Math. Comput. (2004)
  • H. Zhao, Neural Networks (2004)
  • H. Zhao, Appl. Math. Comput. (2004)
  • Z. Wang et al., Phys. Lett. A (2005)
  • S. Arik, Chaos Solitons Fractals (2005)
  • S. Xu et al., Phys. Lett. A (2005)
  • S. Blythe et al., J. Franklin Inst. (2001)
  • L. Xie et al., Systems Control Lett. (1994)
  • Z. Wang et al., Automatica (2003)
  • J.J. Hopfield, Proc. Natl. Acad. Sci. USA (1982)

    This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant GR/S27658/01, the Nuffield Foundation of the UK under Grant NAL/00630/G, and the Alexander von Humboldt Foundation of Germany.
