Abstract
A self-tuning PID control strategy using a reinforcement learning method called CACLA (Continuous Actor-Critic Learning Automaton) is proposed in this paper, with an example application in human-in-the-loop physical assistive control. An advantage of using reinforcement learning is that tuning can be performed online, which matters because the human is a time-variant system. The demonstration also shows that the reinforcement learning framework can incorporate a semi-supervision signal to reinforce positive learning performance at any time step.
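To make the idea concrete, the following is a minimal sketch of CACLA-style online tuning of PID gains. The plant model (a toy first-order system), the linear actor/critic features, and all hyperparameters are assumptions for illustration only, not taken from the paper; the CACLA-specific element is that the actor is moved toward the explored action only when the temporal-difference error is positive.

```python
# Minimal CACLA-style self-tuning PID sketch (illustrative only; the plant,
# features, and hyperparameters below are assumptions, not from the paper).
import numpy as np

rng = np.random.default_rng(0)

class CaclaPidTuner:
    def __init__(self, n_features=3, lr_actor=1e-3, lr_critic=1e-2,
                 gamma=0.95, sigma=0.05):
        # Linear function approximators: the critic estimates V(s),
        # the actor outputs the three PID gains [Kp, Ki, Kd].
        self.w_critic = np.zeros(n_features)
        self.w_actor = np.full((3, n_features), 0.1)
        self.lr_a, self.lr_c = lr_actor, lr_critic
        self.gamma, self.sigma = gamma, sigma

    def gains(self, s):
        return self.w_actor @ s                     # deterministic actor output

    def explore(self, s):
        # Gaussian exploration around the current actor output.
        return self.gains(s) + rng.normal(0.0, self.sigma, 3)

    def update(self, s, a, r, s_next):
        # Temporal-difference error from the critic.
        delta = r + self.gamma * self.w_critic @ s_next - self.w_critic @ s
        self.w_critic += self.lr_c * delta * s
        # CACLA rule: move the actor toward the explored action
        # only when it performed better than expected (delta > 0).
        if delta > 0:
            self.w_actor += self.lr_a * np.outer(a - self.gains(s), s)
        return delta

def run_episode(tuner, setpoint=1.0, dt=0.05, steps=200):
    """Track a setpoint with a toy first-order plant; return summed error cost."""
    y, integ, prev_err = 0.0, 0.0, setpoint
    total_cost = 0.0
    for _ in range(steps):
        err = setpoint - y
        s = np.array([err, integ, (err - prev_err) / dt])  # PID-style features
        a = np.maximum(tuner.explore(s), 0.0)              # keep gains nonnegative
        kp, ki, kd = a
        u = kp * err + ki * integ + kd * (err - prev_err) / dt
        y += dt * (-y + u)                                 # first-order dynamics
        integ += err * dt
        total_cost += err ** 2
        err_next = setpoint - y
        s_next = np.array([err_next, integ, (err_next - err) / dt])
        tuner.update(s, a, -err ** 2, s_next)              # reward penalises error
        prev_err = err
    return total_cost

tuner = CaclaPidTuner()
costs = [run_episode(tuner) for _ in range(30)]
print("first/last episode cost:", round(costs[0], 3), round(costs[-1], 3))
```

In a human-in-the-loop setting, the toy plant would be replaced by the coupled human-robot system and the reward shaped by the assistance objective; the update rule itself is unchanged, which is what makes the online, per-time-step tuning feasible.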
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Zhong, J., Li, Y. (2019). Toward Human-in-the-Loop PID Control Based on CACLA Reinforcement Learning. In: Yu, H., Liu, J., Liu, L., Ju, Z., Liu, Y., Zhou, D. (eds) Intelligent Robotics and Applications. ICIRA 2019. Lecture Notes in Computer Science(), vol 11742. Springer, Cham. https://doi.org/10.1007/978-3-030-27535-8_54
Print ISBN: 978-3-030-27534-1
Online ISBN: 978-3-030-27535-8