Configurations of steady states for Hopfield-type neural networks
Introduction
In this paper, we consider the continuous-time Hopfield-type neural network defined by the following system of nonlinear differential equations:

\dot{x}_i(t) = -a_i x_i(t) + \sum_{j=1}^{n} T_{ij}\, g_j(x_j(t)) + I_i, \quad i = 1, 2, \ldots, n. \qquad (1)

In [1], a semi-discretization technique has been presented for (1), leading to discrete-time neural networks which faithfully preserve the characteristics of (1), i.e. the steady states and their stability properties. Nevertheless, in this paper we consider a more general class of discrete-time Hopfield-type neural networks, defined by the following discrete system:

x_i(p+1) = b_i x_i(p) + (1 - b_i)\, a_i^{-1}\Big[\sum_{j=1}^{n} T_{ij}\, g_j(x_j(p)) + I_i\Big], \quad i = 1, 2, \ldots, n. \qquad (2)

In Eqs. (1), (2), a_i > 0, b_i \in (0,1), the I_i denote the external inputs, T = (T_{ij})_{n \times n} is the interconnection matrix, and g_i : \mathbb{R} \to \mathbb{R} represent the neuron input–output activations. For simplicity, we suppose that the activation functions g_i are of class C^1 on \mathbb{R} and g_i(0) = 0, for i \in \{1, 2, \ldots, n\}.
The systems (1), (2) can be written in the matrix forms:

\dot{x}(t) = -A x(t) + T g(x(t)) + I, \qquad (3)

x(p+1) = B x(p) + (\mathbb{I}_n - B) A^{-1}\big(T g(x(p)) + I\big), \qquad (4)

where x = (x_1, x_2, \ldots, x_n)^T, A = \mathrm{diag}(a_1, a_2, \ldots, a_n), B = \mathrm{diag}(b_1, b_2, \ldots, b_n), I = (I_1, I_2, \ldots, I_n)^T, and g : \mathbb{R}^n \to \mathbb{R}^n is given by g(x) = (g_1(x_1), g_2(x_2), \ldots, g_n(x_n))^T.
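The continuous-time dynamics can be illustrated numerically. The sketch below integrates the matrix form \dot{x} = -Ax + Tg(x) + I by forward Euler for a hypothetical two-neuron network with tanh activations (which are C^1 with g(0) = 0, as required); the parameters A, T, I are illustrative and not taken from the paper.

```python
import numpy as np

# Illustrative parameters (not from the paper): n = 2 neurons.
A = np.diag([1.0, 1.0])          # a_i > 0
T = np.array([[0.5, 0.2],
              [0.1, 0.4]])       # interconnection matrix
I = np.array([0.3, -0.1])        # external input

def g(x):
    # tanh activations: C^1, g(0) = 0, bounded by 1
    return np.tanh(x)

def simulate(x0, h=0.01, steps=20000):
    """Forward-Euler integration of x' = -A x + T g(x) + I."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + h * (-A @ x + T @ g(x) + I)
    return x

x_inf = simulate([0.0, 0.0])
# At a steady state the right-hand side vanishes: A x = T g(x) + I.
residual = np.linalg.norm(-A @ x_inf + T @ g(x_inf) + I)
print(x_inf, residual)
```

Because the interconnection weights here are small, the trajectory settles onto a steady state and the residual of the right-hand side is negligible.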
Since neural networks like (1) were first considered in [2], [3], they have received much attention because of their applicability to problems of optimization, signal processing, image processing, the solution of nonlinear algebraic equations, pattern recognition, associative memories, and so on. The qualitative analysis of neural dynamics plays an important role in the design of practical neural networks.
To solve problems of optimization, neural control and signal processing, neural networks have to be designed in such a way that, for a given external input, they exhibit only one globally asymptotically stable steady state. This matter has been treated in [4], [5], [6], [7], [8], [9], [1] and the references therein.
On the other hand, if neural networks are used to implement associative memories, the existence of several locally asymptotically stable steady states is required, as these states store information and constitute distributed and parallel neural memory networks. In this case, the purpose of the qualitative analysis is the study of the locally exponentially stable steady states (existence, number, regions of attraction) so as to ensure the recall capability of the models. Conditions for the local asymptotic stability of the steady states (and estimations of their regions of attraction) for Hopfield-type neural networks have been derived in [10], [11], [12], [13], [14], [15] using single Lyapunov functions and in [16], [17] using vector Lyapunov functions.
The aim of this paper is to show that, under certain conditions, for some values of the external input I there exists a configuration of at least 2^n asymptotically stable steady states for (1), (2). Therefore, there exist 2^n paths of steady states defined on a set of values of the external input. Estimations of the regions of attraction of these steady states are given. Finally, we address the problem of controllability of these configurations of steady states.
Section snippets
Paths of steady states
A steady state x = (x_1, x_2, \ldots, x_n)^T of (1), (2) corresponding to the external input I = (I_1, I_2, \ldots, I_n)^T is a solution of the equation:

A x = T g(x) + I. \qquad (5)

For a given external input vector I \in \mathbb{R}^n, the system (5) may have one solution, several solutions, or no solution at all. On the other hand, for any state x \in \mathbb{R}^n there exists a unique external input, namely I(x) = A x - T g(x), such that x is a steady state of (1), (2) corresponding to the input I(x). Clearly, the external input function I : \mathbb{R}^n \to \mathbb{R}^n is of class C^1.
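The inverse problem described above is constructive: given any target state x, the input I(x) = Ax - Tg(x) makes x a steady state. A minimal sketch, with hypothetical parameters A, T and tanh activations standing in for the g_i:

```python
import numpy as np

# Illustrative parameters (not from the paper).
A = np.diag([2.0, 1.5])
T = np.array([[1.0, -0.5],
              [0.3,  0.8]])
g = np.tanh

def input_for_state(x):
    """Unique external input making x a steady state: I(x) = A x - T g(x)."""
    return A @ x - T @ g(x)

x_target = np.array([0.7, -0.4])
I = input_for_state(x_target)

# Check: x_target solves the steady-state equation A x = T g(x) + I.
assert np.allclose(A @ x_target, T @ g(x_target) + I)
print(I)
```

Since g is C^1, the map x -> Ax - Tg(x) is C^1 as well, which is exactly the smoothness claim made for the external input function.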
Controllability
Definition 8. A change, at a certain moment, of the external input from I′ to I″ is called a maneuver and is denoted by I′ ↦ I″. We say that the maneuver I′ ↦ I″ made at t = t_0 is successful on the path ϕ_α : H_α → Δ_α of (1) if I′, I″ ∈ H_α and if the solution of the initial value problem consisting of (1) with input I″ and initial condition x(t_0) = ϕ_α(I′) tends to ϕ_α(I″) as t → ∞. We say that the maneuver I′ ↦ I″ made at p = p_0 is successful on the path ϕ_α : H_α → Δ_α of (2) if I′, I″ ∈ H_α and if the solution of the initial value problem consisting of (2) with input I″ and initial condition x(p_0) = ϕ_α(I′) tends to ϕ_α(I″) as p → ∞.
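A maneuver can be sketched numerically: settle the continuous-time system under an input I′, switch the input to I″ while keeping the reached state as the initial condition, and check that the trajectory is attracted to the steady state corresponding to I″. All parameters below are hypothetical; with these small weights each input admits a single globally stable steady state, so the maneuver succeeds.

```python
import numpy as np

# Illustrative parameters (not from the paper).
A = np.diag([1.0, 1.0])
T = np.array([[0.4, 0.1],
              [0.2, 0.3]])
g = np.tanh

def flow(x0, I, h=0.01, steps=30000):
    """Forward-Euler integration of x' = -A x + T g(x) + I from x0."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + h * (-A @ x + T @ g(x) + I)
    return x

I1 = np.array([0.5, -0.2])   # I'
I2 = np.array([-0.3, 0.4])   # I''

phi1 = flow([0.0, 0.0], I1)   # steady state reached under I'
after = flow(phi1, I2)        # maneuver I' -> I'': restart from phi(I') with I''
phi2 = flow([0.0, 0.0], I2)   # steady state under I'' (for comparison)
print(np.allclose(after, phi2, atol=1e-6))
```

In the multistable setting studied in the paper, success is not automatic: the maneuver succeeds only if ϕ_α(I′) lies in the region of attraction of ϕ_α(I″), which is why estimates of those regions matter.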
Configurations of 2^n steady states
In this section, we will consider the following hypothesis for the activation functions:
- (H1)
The activation functions are bounded: |g_i(x)| ≤ 1 for all x ∈ \mathbb{R} and i ∈ {1, 2, …, n}.
- (H2)
There exists α ∈ (0, 1) such that the functions satisfy:
- (H3)
The derivatives of the activation functions satisfy:
These hypotheses are not too restrictive. Indeed, if the activation functions are bounded, but not by 1, i.e. |g_i(x)| ≤ M_i with M_i > 0, one can consider the new activation functions \tilde{g}_i(x) = g_i(x)/M_i together with the rescaled interconnection matrix \tilde{T}_{ij} = T_{ij} M_j, which leaves the systems (1), (2) unchanged.
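A configuration of 2^n stable steady states is easy to exhibit for n = 2. With strong self-excitation (T = 2·I_2, zero external input) and tanh activations, each coordinate of the steady-state equation decouples to x_i = 2 tanh(x_i), which has two stable roots ±x* besides the unstable root 0, giving 2^2 = 4 stable steady states. The parameters are illustrative, not the paper's conditions.

```python
import numpy as np

# Illustrative 2-neuron network with strong self-connections (not from the paper).
A = np.eye(2)
T = 2.0 * np.eye(2)
I = np.zeros(2)
g = np.tanh            # satisfies |g| <= 1, g(0) = 0, C^1

def settle(x0, h=0.01, steps=50000):
    """Forward-Euler integration of x' = -A x + T g(x) + I until convergence."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + h * (-A @ x + T @ g(x) + I)
    return x

# Start in each of the four quadrants; each seed converges to a distinct
# stable steady state (±x*, ±x*) with x* solving x = 2 tanh(x).
seeds = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
states = {tuple(np.round(settle(s), 4)) for s in seeds}
print(len(states))  # 4 distinct stable steady states
```

The same construction extends coordinate-wise to n neurons, which is the intuition behind the 2^n count.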
Conclusions
For continuous and discrete-time Hopfield-type neural networks, conditions ensuring the existence of 2^n paths of exponentially stable steady states defined on a certain set of external input values have been derived. Moreover, it has been shown that the system is controllable along each of these paths of steady states. Finding similar conditions for Cohen–Grossberg neural networks may constitute a direction for future research.
References (22)
- et al., Dynamics of a class of discrete-time neural networks and their continuous-time counterparts, Mathematics and Computers in Simulation (2000)
- et al., Existence and stability of equilibria of continuous time Hopfield neural network, Journal of Computational and Applied Mathematics (2004)
- An estimation on domain of attraction and convergence rate of Hopfield continuous feedback neural networks, Physics Letters A (2004)
- et al., Estimation on domain of attraction and convergence rate of Hopfield continuous feedback neural networks, Journal of Computer and Systems Sciences (2001)
- et al., Exponential stability of discrete-time Hopfield neural networks, Computers and Mathematics with Applications (2004)
- et al., Stability and bifurcation analysis on a discrete-time system of two neurons, Applied Mathematics Letters (2004)
- et al., Stability and bifurcation analysis on a discrete-time neural network, Journal of Computational and Applied Mathematics (2005)
- et al., Control procedures using domains of attraction, Nonlinear Analysis: Theory, Methods and Applications (2005)
- et al., Methods of determination and approximation of domains of attraction, Nonlinear Analysis: Theory, Methods and Applications (2005)
- Memory and learning of sequential patterns by nonmonotone neural networks, Neural Networks (1996)
- Neural networks and physical systems with emergent collective computational abilities, Proceedings of the National Academy of Sciences
Cited by (8)
- Multistability in impulsive hybrid Hopfield neural networks with distributed delays, 2011, Nonlinear Analysis: Real World Applications
- Impulsive hybrid discrete-time Hopfield neural networks with delays and multistability analysis, 2011, Neural Networks
- Multiple periodic solutions in impulsive hybrid neural networks with delays, 2011, Applied Mathematics and Computation
- Survey of dynamics of Cohen–Grossberg-type RNNs, 2016, Studies in Systems, Decision and Control
- Qualitative analysis and control of complex neural networks with delays, 2015, Qualitative Analysis and Control of Complex Neural Networks with Delays
- Applying different learning rules in neuro-symbolic integration, 2012, Advanced Materials Research