
Abstract

This chapter starts with a review of modern machine learning (ML) techniques. The advantages and disadvantages of several ML methods are weighed with respect to their application to the automation of analog integrated circuit sizing and placement, in order to build a clear picture of why artificial neural networks (ANNs) are a good fit for both tasks. An overview of ANNs then introduces the key concepts needed to implement the models described in Chaps. 4 and 5. Finally, the chapter briefly reviews how the ANN learning mechanism works, the optimization techniques used to speed up the convergence of the learning algorithm, and the regularization techniques that help the models generalize to data they have not seen during training, describing along the way the models' hyper-parameters that must be tuned.
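To make these concepts concrete ahead of the detailed treatment, the sketch below trains a one-hidden-layer feedforward ANN with back-propagation. It is a minimal NumPy illustration, not the models of Chaps. 4 and 5: the architecture, the momentum-based stochastic gradient descent update, the L2 weight decay used as a regularizer, and every hyper-parameter value (hidden width, learning rate, momentum coefficient) are assumptions chosen only to show how the pieces named in the abstract fit together.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: learn y = sin(x) on [-pi, pi].
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(X)

# One hidden layer with tanh activation; Glorot-style initialization.
n_in, n_hidden, n_out = 1, 16, 1
W1 = rng.normal(0, np.sqrt(2.0 / (n_in + n_hidden)), (n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, np.sqrt(2.0 / (n_hidden + n_out)), (n_hidden, n_out))
b2 = np.zeros(n_out)

# Illustrative hyper-parameters (the kind the chapter describes tuning).
lr, momentum, weight_decay = 0.02, 0.9, 1e-4
vel = [np.zeros_like(p) for p in (W1, b1, W2, b2)]

for epoch in range(500):
    # Mini-batch SGD: shuffle the data and split it into batches each epoch.
    for batch in np.array_split(rng.permutation(len(X)), 8):
        xb, yb = X[batch], y[batch]

        # Forward pass.
        h = np.tanh(xb @ W1 + b1)
        pred = h @ W2 + b2
        err = pred - yb                    # d(0.5*(pred - y)^2)/dpred, per sample

        # Backward pass (back-propagation), averaged over the batch.
        gW2 = h.T @ err / len(xb)
        gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h**2)     # tanh derivative
        gW1 = xb.T @ dh / len(xb)
        gb1 = dh.mean(axis=0)

        # Momentum update with L2 regularization (weight decay) on the weights.
        params = [W1, b1, W2, b2]
        grads = [gW1 + weight_decay * W1, gb1, gW2 + weight_decay * W2, gb2]
        for p, g, v in zip(params, grads, vel):
            v *= momentum          # accumulate a velocity across updates
            v -= lr * g
            p += v                 # in-place update of the parameter array

mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)
print(f"final training MSE: {mse:.4f}")
```

In this sketch the momentum term accumulates a velocity over mini-batches to speed up convergence, while the weight-decay term penalizes large weights so the fitted function generalizes beyond the training points; both are examples of the optimization and regularization hyper-parameters the chapter discusses.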



Author information


Correspondence to João P. S. Rosa.


Copyright information

© 2020 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Rosa, J.P.S., Guerra, D.J.D., Horta, N.C.G., Martins, R.M.F., Lourenço, N.C.C. (2020). Overview of Artificial Neural Networks. In: Using Artificial Neural Networks for Analog Integrated Circuit Design Automation. SpringerBriefs in Applied Sciences and Technology. Springer, Cham. https://doi.org/10.1007/978-3-030-35743-6_3

  • DOI: https://doi.org/10.1007/978-3-030-35743-6_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-35742-9

  • Online ISBN: 978-3-030-35743-6

  • eBook Packages: Engineering, Engineering (R0)
