
An efficient zeroing neural network for solving time-varying nonlinear equations

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

Defining efficient families of recurrent neural network (RNN) models for solving time-varying nonlinear equations is an active research topic in applied mathematics. One of the key elements in designing an RNN is the choice of an efficient nonlinear activation function, whose role is to produce an output from the set of input values supplied to a node. Our goal is to define a new family of activation functions consisting of a fixed gain parameter and a functional part. The corresponding zeroing neural network (ZNN), termed the varying-parameter improved zeroing neural network (VPIZNN), is defined and applied to solving time-varying nonlinear equations. Compared with previous ZNN models, the new VPIZNN models achieve accelerated finite-time convergence due to the new time-varying activation function embedded in the VPIZNN design. Theoretical results and numerical experiments are presented to demonstrate the superiority of the novel VPIZNN formula. The capability of the proposed VPIZNN models is demonstrated by studying and solving the Van der Pol equation and finding the root \(\root m \of {a(t)}\).
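The ZNN design principle the abstract alludes to can be illustrated with a minimal sketch. This is not the authors' VPIZNN model: it uses the classical ZNN with a plain linear activation and forward-Euler integration, applied to the m = 2 case of the root-finding problem \(x(t)^2 = a(t)\). One defines the error function \(e(t) = x(t)^2 - a(t)\) and imposes the ZNN evolution \(\dot{e} = -\gamma\,\varphi(e)\), then solves for \(\dot{x}\). All function names and parameter values below are illustrative assumptions.

```python
import math

# Time-varying target: track sqrt(a(t)) with a(t) = 2 + sin(t) > 0.
def a(t):
    return 2.0 + math.sin(t)

def da_dt(t):
    return math.cos(t)

def znn_sqrt(gamma=10.0, dt=1e-3, t_end=5.0, x0=1.0):
    """Classical ZNN for e(t) = x(t)^2 - a(t).

    The design formula de/dt = -gamma * e (linear activation) gives
        2*x*dx/dt - da/dt = -gamma * e
    so  dx/dt = (-gamma * e - (-da/dt)) / (2*x)  -- integrated by forward Euler.
    """
    x, t = x0, 0.0
    while t < t_end:
        e = x * x - a(t)
        x += dt * (-gamma * e + da_dt(t)) / (2.0 * x)
        t += dt
    return x, math.sqrt(a(t))

x_final, exact = znn_sqrt()
```

With this linear activation, the error decays only exponentially (time constant \(1/\gamma\)); the finite-time convergence claimed for VPIZNN comes from replacing \(\varphi\) with the paper's nonlinear, time-varying activation family.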




Data availability

All data generated or analyzed during this study are included in this submitted article (and its supplementary information files).


Funding

Ratikanta Behera has received a research grant from the Indian Institute of Science, Bangalore, India. Predrag Stanimirović is supported by the Science Fund of the Republic of Serbia (No. 7750185, Quantitative Automata Models: Fundamental Problems and Applications - QUAM) and gratefully acknowledges support from the Ministry of Education, Science and Technological Development, Republic of Serbia, grant no. 451-03-47/2023-01/200124. Dimitrios Gerontitis was supported by the "Savas Parastatidis" named scholarship granted by the Bodossaki Foundation.

Author information


Corresponding author

Correspondence to Dimitris Gerontitis.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Behera, R., Gerontitis, D., Stanimirović, P. et al. An efficient zeroing neural network for solving time-varying nonlinear equations. Neural Comput & Applic 35, 17537–17554 (2023). https://doi.org/10.1007/s00521-023-08621-x


