Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays☆
Introduction
In the past two decades, since its initiation in [1], the well-known Hopfield neural network has been extensively studied and successfully applied in many areas such as combinatorial optimization, signal processing and pattern recognition, see, e.g., [1], [2], [3], [4], [5]. In particular, the stability problem of Hopfield neural networks has received much research attention since, in applications, the network is often designed to have a single equilibrium that is globally stable. On the other hand, axonal signal transmission delays often occur in various neural networks and may cause undesirable dynamic behaviors such as oscillation and instability. There has therefore been growing research interest in the stability analysis of delayed neural networks, and a large body of literature is now available. Sufficient conditions, either delay-dependent or delay-independent, have been proposed to guarantee asymptotic or exponential stability of such networks, see [6], [7], [8], [9] for some recent results. It is noticed that, so far, most works on delayed neural networks have dealt with stability analysis for networks with discrete time-delays only.
Traditionally, discrete time-delays in the models of delayed feedback systems serve as a good approximation in simple circuits having a small number of cells. Nevertheless, a neural network usually has a spatial nature due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, so that signal propagation is distributed over a certain time period. Such an inherent nature can be suitably modeled by distributed delays [10], [11]. For example, in [11], a neural circuit with distributed delays has been designed to solve a general problem of recognizing patterns in a time-dependent signal. Hence, both discrete and distributed delays should be taken into account when modeling neural networks [12]. Recently, there have been some initial studies on the stability analysis issue for various neural networks with distributed time-delays, see [13], [14], [15]. In [14], criteria ensuring the existence, uniqueness and global asymptotic stability of the equilibrium have been derived for Hopfield neural networks involving distributed delays. It should be mentioned that, most recently, the global asymptotic stability analysis problem has been investigated in [16] for a general class of neural networks with both discrete and distributed time-delays, where a linear matrix inequality (LMI) approach was developed to establish sufficient stability conditions.
In recent years, the stability analysis of neural networks in the presence of parameter uncertainties and/or stochastic perturbations has attracted initial research attention. The reason is twofold: (1) the connection weights of the neurons depend on certain resistance and capacitance values that are subject to uncertainties (modeling errors), and (2) in real nervous systems the synaptic transmission is a noisy process brought on by random fluctuations in neurotransmitter release and other probabilistic causes. Therefore, stability properties have been investigated for delayed neural networks with parameter uncertainties (see, e.g., [17], [18]) or external stochastic perturbations (see, e.g., [19], [20]). However, to the best of the authors' knowledge, the robust stability analysis problem for stochastic Hopfield neural networks with both discrete and distributed delays has not been properly addressed; it remains important and challenging.
In this Letter, we deal with the global robust stability analysis problem for a class of stochastic Hopfield neural networks with discrete and distributed time-delays. By utilizing a Lyapunov–Krasovskii functional and using the well-known S-procedure, we recast the addressed stability analysis problem into a convex optimization problem. Different from the commonly used matrix norm theories (such as the M-matrix method), a unified linear matrix inequality (LMI) approach is developed to establish sufficient conditions for the neural networks to be robustly, globally, asymptotically stable. Note that LMIs can be easily solved by using the Matlab LMI toolbox, and no tuning of parameters is required [21]. Two numerical examples are provided to show the usefulness of the proposed global stability condition.
Notations. The notations are quite standard. Throughout this Letter, $\mathbb{R}^n$ and $\mathbb{R}^{n\times m}$ denote, respectively, the $n$-dimensional Euclidean space and the set of all $n\times m$ real matrices. The superscript "T" denotes matrix transposition and the notation $X \geq Y$ (respectively, $X > Y$), where $X$ and $Y$ are symmetric matrices, means that $X - Y$ is positive semidefinite (respectively, positive definite). $I$ is the identity matrix. $|\cdot|$ is the Euclidean norm in $\mathbb{R}^n$. If $A$ is a matrix, denote by $\|A\|$ its operator norm, i.e., $\|A\| = \sup\{|Ax| : |x| = 1\} = \sqrt{\lambda_{\max}(A^{T}A)}$, where $\lambda_{\max}(\cdot)$ (respectively, $\lambda_{\min}(\cdot)$) means the largest (respectively, smallest) eigenvalue of a matrix. $L^2[0,\infty)$ is the space of square-integrable vector functions. Moreover, let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\geq 0}, P)$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t\geq 0}$ satisfying the usual conditions (i.e., the filtration contains all $P$-null sets and is right continuous). Denote by $L^2_{\mathcal{F}_0}$ the family of all $\mathcal{F}_0$-measurable $\mathbb{R}^n$-valued random variables $\xi$ such that $\mathbb{E}|\xi|^2 < \infty$, where $\mathbb{E}\{\cdot\}$ stands for the mathematical expectation operator with respect to the given probability measure $P$. The shorthand $\mathrm{diag}\{M_1, M_2, \ldots, M_N\}$ denotes a block diagonal matrix with diagonal blocks being the matrices $M_1, M_2, \ldots, M_N$. Sometimes, the arguments of a function or a matrix will be omitted in the analysis when no confusion can arise.
Problem formulation
Recently, Hopfield neural networks with time delays, either discrete or distributed, have been widely investigated, and many stability criteria have been established, see, e.g., [7], [8], [13], [14], [16], [17] for some recent results. As in [16], the Hopfield neural network with both discrete and distributed delays can be described by the following model: where $x(t) \in \mathbb{R}^n$ is the state vector associated with the $n$ neurons.
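The model equation itself is not reproduced above; a representative form for the class of stochastic Hopfield networks with a discrete delay $h > 0$ and a distributed delay $\tau > 0$, consistent with the deterministic class studied in [16] and with the stochastic perturbation motivated in the Introduction, would read (symbols here are illustrative, not the Letter's exact notation or numbering):

```latex
\mathrm{d}x(t) = \Big[ -A\,x(t) + W_0\, f\big(x(t)\big) + W_1\, f\big(x(t-h)\big)
  + W_2 \int_{t-\tau}^{t} f\big(x(s)\big)\,\mathrm{d}s \Big]\,\mathrm{d}t
  + \sigma\big(t,\, x(t),\, x(t-h)\big)\,\mathrm{d}w(t),
```

where $A$ is a positive diagonal self-feedback matrix, $W_0$, $W_1$, $W_2$ are the connection weight matrices, $f(\cdot)$ is the vector of activation functions, and $w(t)$ is a Brownian motion defined on the probability space introduced in the Notations.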
Main results and proofs
We first give the following lemmas that are useful in deriving our LMI-based stability criteria.
Lemma 1. Let $x, y \in \mathbb{R}^n$ and $\varepsilon > 0$. Then we have $2x^{T}y \leq \varepsilon\, x^{T}x + \varepsilon^{-1} y^{T}y$.
Proof. The proof follows immediately from the inequality $\big(\varepsilon^{1/2}x - \varepsilon^{-1/2}y\big)^{T}\big(\varepsilon^{1/2}x - \varepsilon^{-1/2}y\big) \geq 0$. □
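As a quick sanity check, the inequality of Lemma 1 can be verified numerically. The following NumPy sketch (the vectors and the grid of $\varepsilon$ values are arbitrary illustrations, not data from the Letter) evaluates the gap $\varepsilon x^{T}x + \varepsilon^{-1}y^{T}y - 2x^{T}y$, which is non-negative for every $\varepsilon > 0$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
y = rng.standard_normal(4)

def gap(eps):
    """eps*x'x + (1/eps)*y'y - 2*x'y; non-negative for every eps > 0,
    since it equals ||sqrt(eps)*x - y/sqrt(eps)||^2."""
    return eps * (x @ x) + (y @ y) / eps - 2.0 * (x @ y)

# The gap stays non-negative across several orders of magnitude of eps.
print(min(gap(e) for e in (0.01, 0.1, 1.0, 10.0, 100.0)))
```

In the stability proofs, this elementary bound is what allows cross terms in the Lyapunov analysis to be absorbed into quadratic forms at the cost of a free scalar $\varepsilon$.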
Lemma 2 (S-procedure [25]). Let $\Omega = \Omega^{T}$, $H$ and $E$ be real matrices of appropriate dimensions, with $F$ satisfying (8). Then $\Omega + HFE + E^{T}F^{T}H^{T} < 0$ if and only if there exists a positive scalar $\varepsilon > 0$ such that $\Omega + \varepsilon H H^{T} + \varepsilon^{-1} E^{T}E < 0$, or equivalently
$$\begin{bmatrix} \Omega & \varepsilon H & E^{T} \\ \varepsilon H^{T} & -\varepsilon I & 0 \\ E & 0 & -\varepsilon I \end{bmatrix} < 0.$$
Lemma 3 (Schur complement [21]). Given constant matrices $\Omega_1$, $\Omega_2$, $\Omega_3$, where $\Omega_1 = \Omega_1^{T}$ and $0 < \Omega_2 = \Omega_2^{T}$, then $\Omega_1 + \Omega_3^{T}\Omega_2^{-1}\Omega_3 < 0$ if and only if
$$\begin{bmatrix} \Omega_1 & \Omega_3^{T} \\ \Omega_3 & -\Omega_2 \end{bmatrix} < 0, \quad \text{or equivalently} \quad \begin{bmatrix} -\Omega_2 & \Omega_3 \\ \Omega_3^{T} & \Omega_1 \end{bmatrix} < 0.$$
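The Schur-complement equivalence of Lemma 3 is easy to confirm numerically. The NumPy sketch below uses small hypothetical matrices (not data from the Letter) and checks that the scalar-form condition and the block-matrix condition agree:

```python
import numpy as np

def is_neg_def(M):
    """True if the symmetric part of M is negative definite."""
    return bool(np.all(np.linalg.eigvalsh((M + M.T) / 2) < 0))

# Hypothetical data: O1 = O1^T, O2 = O2^T > 0, O3 arbitrary.
O1 = np.array([[-5.0, 1.0], [1.0, -4.0]])
O2 = np.array([[2.0, 0.5], [0.5, 3.0]])
O3 = np.eye(2)

# Condition (a): O1 + O3^T O2^{-1} O3 < 0.
cond_a = is_neg_def(O1 + O3.T @ np.linalg.solve(O2, O3))

# Condition (b): the block matrix [[O1, O3^T], [O3, -O2]] < 0.
block = np.block([[O1, O3.T], [O3, -O2]])
cond_b = is_neg_def(block)

print(cond_a, cond_b)  # the two conditions agree, as Lemma 3 asserts
```

This equivalence is what lets the nonlinear matrix inequality arising from the Lyapunov–Krasovskii analysis be rewritten as a single linear matrix inequality in the decision variables.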
Numerical examples
Two simple examples are presented here in order to illustrate the usefulness of our main results. Our aim is to examine the global asymptotic stability of a given delayed stochastic neural network.
Example 1 In this example, we consider a two-neuron stochastic neural network (35) with both discrete and distributed delays but without parameter uncertainties, where
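The example's network data are not reproduced above, but the flavor of verifying such a stability criterion can be sketched with a deterministic, delay-free surrogate: for a Hurwitz matrix $A$, one seeks $P > 0$ with $A^{T}P + PA < 0$, which is the simplest LMI of the kind solved in this Letter. The NumPy sketch below (all matrices are hypothetical placeholders, not the Letter's Example 1 data) obtains such a $P$ by solving the Lyapunov equation $A^{T}P + PA = -Q$ via Kronecker-product vectorization:

```python
import numpy as np

# Hypothetical stable (Hurwitz) matrix standing in for the network's
# linear part; Q > 0 is a free choice.
A = np.array([[-3.0, 0.5], [0.2, -2.5]])
Q = np.eye(2)
n = A.shape[0]

# Solve A^T P + P A = -Q using vec(A^T P) = (I (x) A^T) vec(P) and
# vec(P A) = (A^T (x) I) vec(P); the stacked system is nonsingular
# because A is Hurwitz.
K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(K, -Q.flatten()).reshape(n, n)
P = (P + P.T) / 2  # symmetrize against round-off

print(np.linalg.eigvalsh(P))            # both positive => P > 0
print(np.linalg.eigvalsh(A.T @ P + P @ A))  # both negative
```

For the full criteria of this Letter, which involve several coupled matrix variables and scalars, one would instead hand the LMIs to a semidefinite-programming solver such as the Matlab LMI toolbox mentioned in the Introduction.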
Conclusions
In this Letter, we have dealt with the problem of global asymptotic stability analysis for a class of uncertain stochastic delayed neural networks involving both discrete and distributed time delays. The traditional monotonicity and smoothness assumptions on the activation function have been removed. A linear matrix inequality (LMI) approach has been developed to solve the problem addressed, and the stability criteria have been derived in terms of the positive definite solution to an LMI, which can be checked efficiently with the Matlab LMI toolbox.
References (29)
- et al., Neurocomputing (2002)
- et al., Phys. Lett. A (2005)
- Neural Networks (2000)
- et al., Physica D (2004)
- et al., Appl. Math. Comput. (2004)
- Neural Networks (2004)
- Appl. Math. Comput. (2004)
- et al., Phys. Lett. A (2005)
- Chaos Solitons Fractals (2005)
- et al., Phys. Lett. A (2005)
- J. Franklin Inst.
- Systems Control Lett.
- Automatica
- Proc. Natl. Acad. Sci. USA
☆ This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant GR/S27658/01, the Nuffield Foundation of the UK under Grant NAL/00630/G, and the Alexander von Humboldt Foundation of Germany.