
Sensitivity Analysis for Decision Boundaries


Abstract

A novel approach is presented to visualize and analyze decision boundaries for feedforward neural networks. First-order sensitivity analysis of the neural network output function with respect to input perturbations is used to visualize the position of decision boundaries over input space. Similarly, sensitivity analysis of each hidden-unit activation function reveals which boundary is implemented by which hidden unit. The paper shows how these sensitivity analysis models can be used to better understand the data being modelled, and to visually identify irrelevant input and hidden units.
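
As a rough sketch of the mechanics behind this idea, the snippet below computes the first-order derivative of a small sigmoid network's output with respect to its input and scans it over a grid of input points: inputs where the derivative norm peaks lie close to a decision boundary, and hidden units whose own sensitivity stays near zero over the whole grid are candidates for removal. The 2-4-1 architecture, the random stand-in weights V and W, and the grid range are hypothetical choices for illustration, not taken from the paper.

```python
import numpy as np

# Illustrative 2-4-1 sigmoid network; V and W stand in for trained weights.
rng = np.random.default_rng(0)
V = rng.normal(size=(4, 2))   # input -> hidden weights (hypothetical)
W = rng.normal(size=(1, 4))   # hidden -> output weights (hypothetical)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def output_sensitivity(z):
    """First-order sensitivity do/dz of the output w.r.t. input z.

    Chain rule through two sigmoid layers:
        do/dz_i = o(1 - o) * sum_j W[0, j] * y_j(1 - y_j) * V[j, i]
    """
    y = sigmoid(V @ z)                                 # hidden activations
    o = sigmoid(W @ y)                                 # network output
    return (o * (1 - o)) * ((W * (y * (1 - y))) @ V)   # shape (1, 2)

def hidden_sensitivity(z):
    """Sensitivity dy_j/dz of each hidden-unit activation; a row that is
    uniformly small over input space flags a hidden unit that implements
    no active boundary."""
    y = sigmoid(V @ z)
    return (y * (1 - y))[:, None] * V                  # shape (4, 2)

# Scan input space: large |do/dz| marks inputs near a decision boundary,
# because the output changes sharply there.
xs = np.linspace(-2.0, 2.0, 41)
sens = np.array([[np.linalg.norm(output_sensitivity(np.array([x1, x2])))
                  for x1 in xs] for x2 in xs])
print("max |do/dz| over grid:", sens.max())
```

Rendering the grid of sensitivity norms as a heat map gives the kind of boundary visualization the abstract describes; repeating the scan with hidden_sensitivity shows which hidden unit's sigmoid is switching along each boundary segment.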



Cite this article

Engelbrecht, A. Sensitivity Analysis for Decision Boundaries. Neural Processing Letters 10, 253–266 (1999). https://doi.org/10.1023/A:1018748928965
