Toward Human-in-the-Loop PID Control Based on CACLA Reinforcement Learning

  • Conference paper
  • First Online:
Intelligent Robotics and Applications (ICIRA 2019)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11742)

Abstract

A self-tuning PID control strategy based on a reinforcement learning method called CACLA (Continuous Actor-Critic Learning Automaton) is proposed in this paper, with human-in-the-loop physical assistive control as the example application. An advantage of using reinforcement learning is that the tuning can be done online, which is important because the human operator is a time-variant system. The demonstration also shows that the reinforcement learning framework can accept a semi-supervision signal that reinforces positive learning performance at any time step.
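The abstract only names the method; as an illustrative sketch, the following self-contained Python snippet shows how CACLA-style learning could self-tune PID gains, using van Hasselt and Wiering's rule of updating the actor only when the temporal-difference error is positive. The toy first-order plant, reward definition, gain values, and the episodic, stateless simplification are all assumptions made here for illustration, not the paper's actual setup.

```python
import numpy as np

# Hypothetical toy setup (assumption, not the paper's experiment):
# a first-order plant y' = -y + u tracked by a PID controller.
rng = np.random.default_rng(0)

def run_episode(gains, setpoint=1.0, steps=50, dt=0.1):
    """Simulate the plant under PID control; reward is the negative
    integrated absolute tracking error."""
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u)          # Euler step of the plant dynamics
        prev_err = err
        cost += abs(err) * dt
    return -cost

# CACLA-style tuning: the "actor" is the gain vector (Kp, Ki, Kd) itself;
# the "critic" is a running estimate of the expected reward of those gains.
actor = np.array([0.5, 0.1, 0.01])   # initial gains (assumed values)
value = run_episode(actor)            # initialise critic at current performance
alpha_actor, alpha_critic, sigma = 0.2, 0.3, 0.05

for episode in range(200):
    explored = actor + rng.normal(0.0, sigma, size=3)  # Gaussian exploration
    reward = run_episode(explored)
    td_error = reward - value          # TD error (stateless simplification)
    value += alpha_critic * td_error   # critic update
    if td_error > 0:                   # CACLA: move the actor toward the
        actor += alpha_actor * (explored - actor)  # action only if it helped
```

The defining CACLA property is the `if td_error > 0` guard: the actor is pulled toward an exploratory action only when that action performed better than the critic predicted, which keeps the gain updates in the continuous action space without discretisation.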



Author information


Corresponding author

Correspondence to Junpei Zhong .


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhong, J., Li, Y. (2019). Toward Human-in-the-Loop PID Control Based on CACLA Reinforcement Learning. In: Yu, H., Liu, J., Liu, L., Ju, Z., Liu, Y., Zhou, D. (eds) Intelligent Robotics and Applications. ICIRA 2019. Lecture Notes in Computer Science, vol 11742. Springer, Cham. https://doi.org/10.1007/978-3-030-27535-8_54

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-27535-8_54

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-27534-1

  • Online ISBN: 978-3-030-27535-8

  • eBook Packages: Computer Science (R0)
