
Part of the book series: Smart Innovation, Systems and Technologies ((SIST,volume 184))


Abstract

We quantitatively study the efficacy of a social intelligence scheme that extends the Extreme Learning Machine paradigm. The key question we investigate is whether, and how, a collection of elementary learning parcels can replace a single algorithm well suited to learning a relatively complex function. The question is by no means new, as it arises in fields ranging from social networks to bioinformatics. We use a well-known benchmark as a touchstone, contributing to its answer with both theoretical and numerical considerations.




Correspondence to Bruno Apolloni.


Copyright information

© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Apolloni, B., Shehhi, A.A., Damiani, E. (2021). The Simplification Conspiracy. In: Esposito, A., Faundez-Zanuy, M., Morabito, F., Pasero, E. (eds) Progresses in Artificial Intelligence and Neural Systems. Smart Innovation, Systems and Technologies, vol 184. Springer, Singapore. https://doi.org/10.1007/978-981-15-5093-5_2
