Abstract

The discrete-time delayed neural network with complex-valued linear threshold neurons is considered. By constructing appropriate Lyapunov-Krasovskii functionals and employing the linear matrix inequality technique together with an analysis method, several new delay-dependent criteria for checking boundedness and global exponential stability are established. Illustrative examples are also given to show the effectiveness and reduced conservatism of the proposed criteria.

1. Introduction

In the past decade, neural networks have received increasing interest owing to their applications in many areas such as signal processing, pattern recognition, associative memories, parallel computation, and optimization solvers [1]. In such applications, the qualitative analysis of the dynamical behaviors is a necessary step for the practical design of neural networks [2].

On the other hand, artificial neural networks are usually implemented by integrated circuits. In such implementations, time delays are produced by the finite switching speed of amplifiers and the finite propagation speed of electronic signals. In implementations on very large-scale integrated chips, transmission time delays may destroy the dynamical behaviors of neural networks. Hence it is worthwhile to study the dynamical behaviors of neural networks with delays [3]. In recent years, some important results on the boundedness, convergence, global exponential stability, synchronization, state estimation, and passivity analysis of delayed neural networks have been reported; see [1–9] and the references therein for some recent publications.

It should be pointed out that all of the above-mentioned works on the dynamical behaviors of delayed neural networks are concerned with the continuous-time case. However, when implementing a continuous-time delayed neural network for computer simulation, it becomes essential to formulate a discrete-time system that is an analogue of the continuous-time delayed neural network. To some extent, the discrete-time analogue inherits the dynamical characteristics of the continuous-time delayed neural network under mild or no restriction on the discretization step size, and also retains some functional similarity [10]. Unfortunately, as pointed out in [11], the discretization cannot preserve the dynamics of the continuous-time counterpart even for a small sampling period, and therefore there is a crucial need to study the dynamics of discrete-time neural networks. Recently, the dynamics analysis problem for discrete-time delayed neural networks and discrete-time systems with delay has been extensively studied; see [10–19] and the references therein.

It is known that complex-number calculus is useful in areas such as electrical engineering, informatics, control engineering, bioengineering, and other related fields. It is therefore not surprising that complex-valued neural networks, which deal with complex-valued data, complex-valued weights, and complex-valued neuron activation functions, have also been widely studied in recent years [20, 21]. Very recently, classes of discrete-time recurrent neural networks with complex-valued weights and activation functions were considered in [22, 23]. In [22], the authors discussed the convergence of discrete-time recurrent neural networks with multivalued neurons, which have complex-valued weights and an activation function defined as a function of the argument of a weighted sum. In [23], the boundedness, global attractivity, and complete stability were investigated for discrete-time recurrent neural networks with complex-valued linear threshold neurons. However, the delay is not considered in [22, 23], and the given criteria for checking the boundedness, global attractivity, and complete stability are conservative to some extent. Therefore, it is important and necessary to further improve the results reported in [23].

Motivated by the above discussions, the objective of this paper is to study the boundedness and stability of a discrete-time delayed neural network with complex-valued linear threshold neurons.

2. Model Description and Preliminaries

In this paper, we consider the following discrete-time complex-valued neural network with time delay:

\[
z(k+1) = f\bigl(Wz(k) + W^{d}z(k-\tau) + u\bigr), \quad k = 0, 1, 2, \ldots \tag{2.1}
\]

Here $k$ is a nonnegative integer and $z(k) \in \mathbb{C}^{n}$ is a vector defined as $z(k) = (z_{1}(k), z_{2}(k), \ldots, z_{n}(k))^{T}$, where $z_{j}(k)$ denotes the activity of the $j$th neuron. Further, $f : \mathbb{C}^{n} \to \mathbb{C}^{n}$ is a complex-valued function defined as $f(z) = (f(z_{1}), f(z_{2}), \ldots, f(z_{n}))^{T}$ and

\[
f(z_{j}) = \max\{0, \operatorname{Re}(z_{j})\} + i\max\{0, \operatorname{Im}(z_{j})\}, \quad j = 1, 2, \ldots, n. \tag{2.2}
\]

In (2.1), $u \in \mathbb{C}^{n}$, $W \in \mathbb{C}^{n \times n}$, and $W^{d} \in \mathbb{C}^{n \times n}$ are the input vector, the connection weight matrix, and the delayed connection weight matrix, respectively, and $\tau$ denotes the time delay, which is a positive integer. The initial condition associated with model (2.1) is given by

\[
z(s) = \phi(s), \quad s = -\tau, -\tau + 1, \ldots, 0. \tag{2.3}
\]
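To make the iteration concrete, the following minimal sketch in Python/NumPy simulates a trajectory of (2.1). The weight values below are hypothetical, chosen only for illustration; the activation implements the linear threshold function (2.2) applied separately to the real and imaginary parts.

```python
import numpy as np

def f(z):
    # Linear threshold activation (2.2): ReLU applied separately
    # to the real and imaginary parts of each component.
    return np.maximum(z.real, 0.0) + 1j * np.maximum(z.imag, 0.0)

def simulate(W, Wd, u, tau, phi, steps):
    """Iterate z(k+1) = f(W z(k) + Wd z(k - tau) + u) from the history phi.

    phi is a list of tau + 1 initial states z(-tau), ..., z(0);
    the returned array contains z(0), ..., z(steps).
    """
    hist = list(phi)   # hist[-1] = z(k), hist[-1 - tau] = z(k - tau)
    for _ in range(steps):
        hist.append(f(W @ hist[-1] + Wd @ hist[-1 - tau] + u))
    return np.array(hist[tau:])

# Hypothetical two-neuron data, for illustration only.
W   = np.array([[0.2 + 0.1j, -0.1], [0.05j, 0.3]])
Wd  = np.array([[0.1, 0.0], [0.0, 0.1 - 0.05j]])
u   = np.array([0.5 + 0.5j, -0.2])
tau = 2
phi = [np.zeros(2, dtype=complex)] * (tau + 1)

traj = simulate(W, Wd, u, tau, phi, steps=50)
print(np.abs(traj[-1]))   # the moduli settle down when the trajectory is bounded
```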

Remark 2.1. When $W^{d} = 0$, model (2.1) turns into the following delay-free model studied in [22, 23]:

\[
z(k+1) = f\bigl(Wz(k) + u\bigr), \quad k = 0, 1, 2, \ldots \tag{2.4}
\]

Hence, the model in [22, 23] is a special case of the model in this paper.

Definition 2.2. A vector $\check z \in \mathbb{C}^{n}$ is called an equilibrium point of neural network (2.1) if it satisfies

\[
\check z = f\bigl(W\check z + W^{d}\check z + u\bigr).
\]

Definition 2.3. The neural network (2.1) is said to be bounded if each of its trajectories is bounded.

Definition 2.4. The equilibrium point $\check z$ of model (2.1) with the initial condition (2.3) is said to be globally exponentially stable if there exist two positive constants $\varepsilon$ and $M$ such that

\[
\|z(k) - \check z\| \le M e^{-\varepsilon k} \sup_{-\tau \le s \le 0} \|\phi(s) - \check z\|, \quad k = 0, 1, 2, \ldots
\]

Throughout this paper, for any constant , we denote , , , and . Now we give an assumption on the connection weights.

(H) For each , there exist positive real numbers satisfying

For presentation convenience, in the following we denote Let us define Let be the sequence defined by , where .

To prove our results, the following lemma, which can be found in [24], is needed in this paper.

Lemma 2.5 (see [24]). Let $q$ be a nonzero number. Then $h_{k} = q^{k}$ is a solution of the homogeneous recurrence relation with constant coefficients

\[
h_{k} = a_{1}h_{k-1} + a_{2}h_{k-2} + \cdots + a_{r}h_{k-r}, \quad a_{r} \neq 0,\ k \ge r, \tag{2.11}
\]

if and only if $q$ is a root of the polynomial equation

\[
x^{r} - a_{1}x^{r-1} - a_{2}x^{r-2} - \cdots - a_{r} = 0. \tag{2.12}
\]

If the polynomial equation (2.12) has $r$ distinct roots $q_{1}, q_{2}, \ldots, q_{r}$, then

\[
h_{k} = c_{1}q_{1}^{k} + c_{2}q_{2}^{k} + \cdots + c_{r}q_{r}^{k} \tag{2.13}
\]

is the general solution of (2.11) in the following sense: no matter what initial values for $h_{0}, h_{1}, \ldots, h_{r-1}$ are given, there are constants $c_{1}, c_{2}, \ldots, c_{r}$ so that (2.13) is the unique sequence which satisfies both the recurrence relation (2.11) and the initial conditions.

The polynomial equation (2.12) is called the characteristic equation of the recurrence relation (2.11) and its roots are the characteristic roots.
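As a quick numerical illustration of Lemma 2.5 (our own toy example, not one taken from [24]): for a second-order recurrence with distinct characteristic roots, the constants $c_{1}, c_{2}$ in (2.13) can be recovered from the initial values by solving a Vandermonde system, and the closed form then reproduces the directly iterated sequence.

```python
import numpy as np

# Toy recurrence h_k = 0.5 h_{k-1} + 0.3 h_{k-2} with initial values h_0, h_1.
a = [0.5, 0.3]
h0, h1 = 1.0, 2.0

# Characteristic equation x^2 - 0.5 x - 0.3 = 0 (cf. (2.12)); roots are distinct.
roots = np.roots([1.0, -a[0], -a[1]])

# Solve the Vandermonde system so that (2.13) matches h_0 and h_1:
# c_1 + c_2 = h_0 and c_1 q_1 + c_2 q_2 = h_1.
V = np.vander(roots, 2, increasing=True).T
c = np.linalg.solve(V, np.array([h0, h1]))

closed_form = lambda k: np.real(c @ roots**k)   # h_k = c_1 q_1^k + c_2 q_2^k

# Compare against direct iteration of the recurrence.
h = [h0, h1]
for k in range(2, 10):
    h.append(a[0] * h[-1] + a[1] * h[-2])
print(all(abs(h[k] - closed_form(k)) < 1e-9 for k in range(10)))  # True
```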

3. The Main Results and Their Proofs

Theorem 3.1. If the assumption (H) holds and , the network (2.1) is bounded.

Proof. Let . It is noted that the restriction is nonnegative and nondecreasing and the restriction if . Consequently, we have Moreover, it is easy to prove that
From (2.1), we get that for . Note that , , , , , , , , and . Hence, based on the monotonicity of , we have Similarly, we have Hence, Thus, Now let be a sequence with From the definition of , we have the following two equations: for . It follows from (3.9) that That is, for Then the characteristic equation of the recurrence relation (3.11) is In the following, we will prove that (i) the roots of (3.12) are distinct and (ii) for each root of (3.12).
For (i), let . Then . It is clear that and are coprime since and . Hence has no repeated factor, which means that (3.12) has distinct roots.
For (ii), assume to the contrary that there exists some such that . Then we have from (3.12) that Multiplying both sides of inequality (3.13) by , we get Thus, which contradicts assumption (H). Therefore, for any root of (3.12).
Let be the distinct roots of (3.12); then (). From Lemma 2.5, we get that for , where are constants which are uniquely determined by the initial conditions: .
From (3.8), we have It follows that for . From (), we know that , so Thus, the series is bounded; that is to say, there exists a positive constant such that
By the definition of and inequality (3.7), we know that , for . It follows from the definition of that By the properties (3.1), (3.2) of function and the definition of , we know
On the other hand, we can get from (2.1) that for . It is noted that and , since , , , , and . Thus, we have where the last inequality is due to . Similarly, we can deduce that From (3.21), (3.23), and (3.24), we know that the real part and imaginary part of are both bounded, so each trajectory of network (2.1) is bounded.
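For a concrete instance of the recurrence, the two claims about the roots of (3.12) established above are easy to check numerically. A minimal sketch follows, with hypothetical coefficients standing in for those of the characteristic equation (3.12).

```python
import numpy as np

# Hypothetical coefficients standing in for the characteristic equation (3.12):
# here x^3 - 0.4 x^2 - 0.2 = 0, chosen only for illustration.
coeffs = [1.0, -0.4, 0.0, -0.2]
roots = np.roots(coeffs)

# (i) the roots are distinct, and (ii) every root has modulus less than 1.
assert len(np.unique(np.round(roots, 12))) == len(roots)
assert np.all(np.abs(roots) < 1.0)
print(np.abs(roots))
```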

Theorem 3.2. If there exist three symmetric positive definite matrices , , and , and two positive diagonal matrices and such that the matrix is a negative definite matrix, then network (2.1) is globally exponentially stable.

Proof. Let . For , define Then is a Banach space with the topology of uniform convergence. For any , let and be the solutions of model (2.1) starting from and , respectively.
It follows from model (2.1) that Let and . Then (3.27) can be written as Now we consider the following Lyapunov functional candidate for system (3.28) as where Then Therefore, where
It is easy to prove that We define From (3.32) and (3.34), we have By the definition of , we can get the following two inequalities: It is obvious that (3.37) is equivalent to and (3.38) is equivalent to where is the unit column vector having 1 in the th row and zeros elsewhere.
Let and , where for each . It follows from (3.36), (3.39) and (3.40) that Since is a negative definite matrix, we have
By a method similar to that in [16], we can prove that network (2.1) is globally exponentially stable.
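In practice, the condition of Theorem 3.2 is verified by assembling the matrix from the candidate matrices in the theorem (for instance, ones returned by an LMI solver) and testing negative definiteness. A minimal sketch of the test itself, run on a hypothetical placeholder matrix rather than the theorem's actual block matrix:

```python
import numpy as np

def is_negative_definite(M, tol=1e-10):
    # A symmetric matrix is negative definite iff its largest eigenvalue is < 0.
    M = np.asarray(M, dtype=float)
    assert np.allclose(M, M.T), "expected a symmetric matrix"
    return np.linalg.eigvalsh(M).max() < -tol

# Hypothetical placeholder; in practice, assemble the block matrix of
# Theorem 3.2 from the candidate matrices and test it in the same way.
M = np.array([[-2.0, 0.5],
              [0.5, -1.0]])
print(is_negative_definite(M))   # True
```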

Corollary 3.3. If there exist a positive diagonal matrix and three positive definite matrices , , and such that is negative definite, then network (2.4) is globally exponentially stable.

Proof. Similar to the proof of Theorem 3.2, let and . Then
Now we consider the following Lyapunov functional candidate for system (3.44) as where Then Let Then As in the proof of Theorem 3.2, there exists a matrix with such that So By a method similar to that in [16], we can prove that network (2.4) is globally exponentially stable.

Remark 3.4. It is known that the criteria obtained for checking the stability of discrete-time delayed neural networks depend, in varying degrees, on the mathematical techniques used and on the constructed Lyapunov functionals or Lyapunov-Krasovskii functionals. Using elegant mathematical techniques and constructing proper Lyapunov functionals or Lyapunov-Krasovskii functionals can reduce conservatism. Establishing less conservative results will therefore be a topic of future work.

Remark 3.5. Recently, the delay-fractioning approach has been widely used to reduce conservatism; it has been proved that this approach can reduce conservatism more than many previous methods because it retains some useful terms [25]. In [25], the delay-fractioning approach was used to investigate the global synchronization of delayed complex networks with stochastic disturbances, which has shown its potential for reducing conservatism. Using the delay-partitioning approach, we can also investigate the stability of discrete-time delayed neural networks; the corresponding results will appear in the near future.

4. Examples

Here, we present two examples to show the validity of our results.

Example 4.1. Consider a two-neuron neural network (2.1), where
Taking , we can calculate . From Theorem 3.1, we know that the considered network (2.1) is bounded.
Furthermore, when , the matrix in Theorem 3.2 is a negative definite matrix. From Theorem 3.2, we know that the considered network (2.1) is globally exponentially stable. In fact, we can verify that and are the unique equilibrium points of and of the considered network (2.1), respectively. The global exponential stability of the equilibrium points is further verified by the simulations given in Figures 1, 2, 3, and 4.
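Equilibrium points such as those reported in this example can be verified numerically via the fixed-point condition of Definition 2.2. A minimal sketch with hypothetical data follows; the example's actual weights and inputs would be substituted in practice.

```python
import numpy as np

def f(z):
    # Linear threshold activation (2.2) on real and imaginary parts.
    return np.maximum(z.real, 0.0) + 1j * np.maximum(z.imag, 0.0)

def is_equilibrium(z, W, Wd, u, tol=1e-8):
    # Fixed-point condition of Definition 2.2: z = f(W z + Wd z + u).
    return np.linalg.norm(f(W @ z + Wd @ z + u) - z) < tol

# Hypothetical two-neuron data, for illustration only.
W  = np.array([[0.2, 0.1j], [0.0, 0.25]])
Wd = np.array([[0.1, 0.0], [0.0, 0.1]])
u  = np.array([1.0 + 1.0j, 0.5])

# Locate a candidate equilibrium by iterating the map, then verify it.
z = np.zeros(2, dtype=complex)
for _ in range(200):
    z = f(W @ z + Wd @ z + u)
print(z, is_equilibrium(z, W, Wd, u))
```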

Example 4.2. Consider a two-neuron neural network (2.4), where Obviously, we cannot find a diagonal positive definite matrix such that is a Hermitian matrix, so the theorem in [23] cannot be used to judge the stability of the considered network (2.4).
When , the matrix in Corollary 3.3 is a negative definite matrix. From Corollary 3.3, we know that the considered network (2.4) is globally exponentially stable.

5. Conclusion

In this paper, the discrete-time delayed neural network with complex-valued linear threshold neurons has been considered. Several new delay-dependent criteria for checking boundedness and global exponential stability have been established by constructing appropriate Lyapunov-Krasovskii functionals and employing the linear matrix inequality technique together with an analysis method. The proposed results are less conservative than some recently reported ones in the literature, as demonstrated by two examples.

We would like to point out that it is possible to generalize our main results to more complex systems, such as neural networks with time-varying delays [1, 8], neural networks with parameter uncertainties [19], stochastic perturbations [17], Markovian jumping parameters [18], and some nonlinear systems [26–32]. The corresponding results will appear in the near future.

Acknowledgment

This work is supported by the National Natural Science Foundation of China under Grant nos. 60974132 and 10772152.