Abstract
A novel neural network architecture is proposed to solve the nonlinear function decomposition problem. A top-down approach is applied that requires no prior knowledge of the function's properties. The capabilities of our method are demonstrated on synthetic test functions and confirmed by the solution of a real-world problem. Possible directions for further development of the presented approach are discussed.
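To make the problem concrete: function decomposition seeks to express a multivariate function as a superposition of simpler (typically univariate) functions. The following is a hypothetical illustration, not the paper's architecture; the functions `f`, `h1`, `h2`, and the example `exp(log(x) + log(y))` identity are chosen purely for demonstration, assuming positive inputs.

```python
import math

# Illustrative sketch (not the proposed network): the product f(x, y) = x * y
# can be decomposed, for x, y > 0, into univariate "inner" functions (log),
# addition, and a univariate "outer" function (exp):
#     f(x, y) = exp(log(x) + log(y))
# Decomposition methods aim to discover such structure automatically.

def f(x, y):
    return x * y

def decomposed(x, y):
    # univariate inner functions applied to each argument separately
    h1 = math.log(x)
    h2 = math.log(y)
    # univariate outer function applied to the sum of inner outputs
    return math.exp(h1 + h2)

# Numerical check that the decomposition reproduces the original function
for x, y in [(2.0, 3.0), (0.5, 8.0), (1.5, 1.5)]:
    assert abs(f(x, y) - decomposed(x, y)) < 1e-9
```

A network that learns such a decomposition needs no prior knowledge of the closed-form identity; the sketch above merely shows what a recovered decomposition looks like for one hand-picked function.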
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Bodyanskiy, Y., Popov, S., Titov, M. (2009). Function Decomposition Network. In: Alippi, C., Polycarpou, M., Panayiotou, C., Ellinas, G. (eds) Artificial Neural Networks – ICANN 2009. ICANN 2009. Lecture Notes in Computer Science, vol 5768. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04274-4_74
Print ISBN: 978-3-642-04273-7
Online ISBN: 978-3-642-04274-4