Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in IEEE ICCPCCT, 2018
Continuum robots have become very popular in recent years owing to their widespread applications in space, defence, medical, underwater, and industrial settings. Modelling these robots is difficult because of their highly nonlinear dynamics, which motivates model-free intelligent control. In this paper, two intelligent model-free adaptive methods, Reinforcement Learning (RL) and Artificial Neural Network based proportional-integral-derivative (ANN-PID) control, are applied to a hardware continuum robot. The RL technique uses a continuous-state, discrete-action Q-learning method, while the ANN-PID controller is implemented with a single neuron. The performance of both methods is compared by implementing them on the hardware robot.
Recommended citation: Chattopadhyay, S., Bhattacherjee, S., Bandyopadhyay, S., Sengupta, A. and Bhaumik, S., 2018, March. Control of single-segment continuum robots: reinforcement learning vs. neural network based PID. In 2018 International Conference on Control, Power, Communication and Computing Technologies (ICCPCCT) (pp. 222-226). IEEE. https://ieeexplore.ieee.org/abstract/document/8574225
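The single-neuron ANN-PID idea can be illustrated with a short sketch. The toy plant, the overall gain `K`, the learning rates, and the supervised Hebbian weight update below are illustrative assumptions for a generic incremental single-neuron PID, not the exact implementation used in the paper.

```python
import numpy as np

class SingleNeuronPID:
    """Single-neuron adaptive PID: the neuron's three inputs are the
    incremental P, I, and D error terms, its (normalised) weights act as
    self-tuned PID gains, and the weights adapt online with a supervised
    Hebbian rule. All parameter values are illustrative assumptions."""

    def __init__(self, K=0.5, lr=(0.01, 0.01, 0.01), w0=(0.3, 0.4, 0.3)):
        self.K = K                    # overall neuron (controller) gain
        self.lr = np.array(lr)        # per-weight learning rates
        self.w = np.array(w0, float)  # weights, roughly (Kp, Ki, Kd)
        self.e1 = self.e2 = 0.0       # e(k-1), e(k-2)
        self.u = 0.0                  # previous control output

    def step(self, e):
        # Incremental PID basis: x[0] ~ P term, x[1] ~ I term, x[2] ~ D term
        x = np.array([e - self.e1, e, e - 2 * self.e1 + self.e2])
        wn = self.w / np.sum(np.abs(self.w))      # normalised weights
        self.u = self.u + self.K * float(wn @ x)  # incremental control law
        # Supervised Hebbian update: weight change proportional to e * u * x
        self.w += self.lr * e * self.u * x
        self.e2, self.e1 = self.e1, e
        return self.u

# Usage: regulate a toy first-order plant y(k+1) = y(k) + 0.1*(u - y)
ctrl = SingleNeuronPID()
y = 0.0
for _ in range(2000):
    u = ctrl.step(1.0 - y)   # track a unit setpoint
    y += 0.1 * (u - y)
```

Normalising the weights keeps the effective gains bounded even as the Hebbian updates accumulate, which is a common stabilising trick in single-neuron PID formulations.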
Published in IEEE IBSSC, 2019
Use of Reinforcement Learning (RL) in designing adaptive self-tuning PID controllers is a relatively new horizon of research, with Q-learning and its variants being the predominant algorithms found in the literature. However, the possibility of using an interesting alternative, the Advantage Actor Critic (A2C) algorithm, in this context is relatively unexplored. In the present study, Deep Q Network (DQN) and A2C approaches are employed to design self-tuning PID controllers. A comparative performance analysis of both controllers was undertaken in a simulation environment on a servo position control system, with various static and dynamic control objectives, keeping a conventional PID controller as a baseline. The A2C-based Adaptive PID Controller (A2CAPID) is more promising for trajectory-tracking problems, whereas the DQN-based Adaptive PID Controller (DQNAPID) is better suited to systems with relatively large plant parameter variations.
Recommended citation: Mukhopadhyay, R., Bandyopadhyay, S., Sutradhar, A. and Chattopadhyay, P., 2019, July. Performance Analysis of Deep Q Networks and Advantage Actor Critic Algorithms in Designing Reinforcement Learning-based Self-tuning PID Controllers. In 2019 IEEE Bombay Section Signature Conference (IBSSC) (pp. 1-6). IEEE. https://ieeexplore.ieee.org/abstract/document/8973068
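For intuition about RL-based gain self-tuning, here is a deliberately simplified sketch: it substitutes tabular Q-learning for the paper's DQN/A2C function approximators, tunes only a proportional gain, and uses a toy first-order servo model. The state discretisation, action set, reward, and hyperparameters are all assumptions made for illustration, not the paper's setup.

```python
import numpy as np

def train_qpid(episodes=30, steps=200, seed=0):
    """Tabular Q-learning agent that retunes Kp online for a toy servo
    y(k+1) = y(k) + 0.1*(Kp*e - y). State = discretised tracking error,
    actions = {decrease, keep, increase} the proportional gain Kp,
    reward = negative absolute tracking error."""
    rng = np.random.default_rng(seed)
    bins = np.linspace(-1.0, 1.0, 9)       # error-discretisation edges
    actions = np.array([-0.1, 0.0, 0.1])   # additive Kp adjustments
    Q = np.zeros((len(bins) + 1, len(actions)))
    alpha, gamma, eps = 0.2, 0.9, 0.1      # learning rate, discount, epsilon

    for _ in range(episodes):
        y, kp = 0.0, 1.0
        e = 1.0 - y                         # track a unit setpoint
        s = np.digitize(e, bins)
        for _ in range(steps):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = int(rng.integers(len(actions)))
            else:
                a = int(np.argmax(Q[s]))
            kp = float(np.clip(kp + actions[a], 0.1, 10.0))
            u = kp * e                      # proportional control only
            y += 0.1 * (u - y)              # first-order plant step
            e = 1.0 - y
            s2 = np.digitize(e, bins)
            r = -abs(e)                     # penalise tracking error
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
    return Q, kp

Q, kp_final = train_qpid()
```

A DQN or A2C agent would replace the Q-table with a neural network (and, for A2C, a separate policy head), but the interaction loop of observing an error-derived state and nudging the controller gains is the same.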
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different field in type. You can put anything in this field.
Teaching Assistant, IIEST Shibpur, 2019
Teaching Assistant, IIEST Shibpur, 2020
Teaching Assistant, IIEST Shibpur, 2020
Teaching Assistant, IIT Delhi, 2021
Teaching Assistant, IIT Delhi, 2021
Teaching Assistant, IIT Delhi, 2021