Page Not Found
Page not found. Your pixels are in another canvas.
A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
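For reference, a minimal sketch of the relevant setting in the site's Jekyll configuration file (assuming the stock academicpages/Jekyll layout; the surrounding keys and exact file name may differ on your site):

```yaml
# config.yml -- Jekyll site configuration (sketch)
# When false, posts with a date in the future are excluded from the build,
# so scheduled posts stay hidden until their publish date.
future: false
```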
Published:
I am in search of the way
Meandering paths lead me here today
I am driving at the speed of sound
Darkness in the air around
Published:
This is a sample blog post. Lorem ipsum I can't remember the rest of lorem ipsum and don't have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Published:
This is a sample blog post. Lorem ipsum I can't remember the rest of lorem ipsum and don't have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Published:
This is a sample blog post. Lorem ipsum I can't remember the rest of lorem ipsum and don't have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in IEEE ICCPCCT, 2018
Continuum robots have become very popular in recent years due to their widespread applications in space, defence, medical, underwater, and industrial settings. Modelling these robots is difficult because of their highly nonlinear dynamic characteristics, which necessitates model-less intelligent control. In this paper, two intelligent model-less adaptive methods, Reinforcement Learning (RL) and Artificial Neural Network-based proportional-integral-derivative (ANN-PID) control, are applied to a hardware continuum robot. The RL technique uses a continuous-state, discrete-action Q-learning method, and the ANN-PID controller is implemented with a single neuron. The performance of both methods is compared by implementing them on the hardware robot.
Recommended citation: Chattopadhyay, S., Bhattacherjee, S., Bandyopadhyay, S., Sengupta, A. and Bhaumik, S., 2018, March. Control of single-segment continuum robots: reinforcement learning vs. neural network based PID. In 2018 International Conference on Control, Power, Communication and Computing Technologies (ICCPCCT) (pp. 222-226). IEEE. https://ieeexplore.ieee.org/abstract/document/8574225
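As a rough illustration of the continuous-state, discrete-action Q-learning idea described in the abstract (not the paper's implementation: the toy plant, state binning, reward, and hyperparameters below are invented for the sketch, and the single-neuron ANN-PID baseline is not reproduced here):

```python
import numpy as np

# Minimal sketch: Q-learning over a binned continuous tracking error with a
# small discrete action set, driving a placeholder single-segment bending model.

N_BINS = 21                            # discretize the continuous error into bins
ACTIONS = np.array([-1.0, 0.0, 1.0])   # discrete actuation increments
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

q_table = np.zeros((N_BINS, len(ACTIONS)))

def bin_state(error, lo=-1.0, hi=1.0):
    """Map a continuous tracking error to a discrete bin index."""
    clipped = np.clip(error, lo, hi)
    return int((clipped - lo) / (hi - lo) * (N_BINS - 1))

def toy_plant(angle, action):
    """Placeholder first-order response of a bending angle to an actuation step."""
    return angle + 0.1 * action

target = 0.5
for episode in range(200):
    angle = 0.0
    for step in range(50):
        s = bin_state(target - angle)
        # epsilon-greedy selection over the discrete action set
        if np.random.rand() < EPS:
            a = np.random.randint(len(ACTIONS))
        else:
            a = int(np.argmax(q_table[s]))
        angle = toy_plant(angle, ACTIONS[a])
        reward = -abs(target - angle)          # penalize tracking error
        s_next = bin_state(target - angle)
        # standard Q-learning temporal-difference update
        q_table[s, a] += ALPHA * (reward + GAMMA * q_table[s_next].max() - q_table[s, a])
```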
Published in IEEE IBSSC, 2019
The use of Reinforcement Learning (RL) in designing adaptive self-tuning PID controllers is a relatively new horizon of research, with Q-learning and its variants being the predominant algorithms in the literature. However, the possibility of using an alternative algorithm, Advantage Actor-Critic (A2C), in this context is relatively unexplored. In the present study, Deep Q-Network (DQN) and A2C approaches are employed to design self-tuning PID controllers. A comparative performance analysis of both controllers was undertaken in a simulation environment on a servo position control system, with various static and dynamic control objectives, keeping a conventional PID controller as a baseline. The A2C-based Adaptive PID Controller (A2CAPID) is more promising in trajectory-tracking problems, whereas the DQN-based Adaptive PID Controller (DQNAPID) is better suited to systems with relatively large plant parameter variations.
Recommended citation: Mukhopadhyay, R., Bandyopadhyay, S., Sutradhar, A. and Chattopadhyay, P., 2019, July. Performance Analysis of Deep Q Networks and Advantage Actor Critic Algorithms in Designing Reinforcement Learning-based Self-tuning PID Controllers. In 2019 IEEE Bombay Section Signature Conference (IBSSC) (pp. 1-6). IEEE. https://ieeexplore.ieee.org/abstract/document/8973068
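A structural sketch of the self-tuning PID idea the abstract describes: an RL policy observes the error signal and nudges the PID gains online while a conventional PID law acts on a servo plant. The servo model, gain-increment actions, and the random stand-in policy are illustrative assumptions; the paper trains DQN and A2C agents for this role, which is beyond this snippet.

```python
import numpy as np

# Self-tuning PID loop where a policy adjusts (Kp, Ki, Kd) each control step.

DT = 0.01
ACTIONS = [(dkp, dki, dkd)                     # discrete gain increments
           for dkp in (-0.1, 0.0, 0.1)
           for dki in (-0.01, 0.0, 0.01)
           for dkd in (-0.01, 0.0, 0.01)]

def servo_step(pos, vel, u):
    """Toy second-order servo: acceleration proportional to input minus damping."""
    acc = 5.0 * u - 0.5 * vel
    vel += acc * DT
    pos += vel * DT
    return pos, vel

def policy(observation):
    """Stand-in for the trained DQN/A2C policy: returns an action index."""
    return np.random.randint(len(ACTIONS))

kp, ki, kd = 1.0, 0.0, 0.0
pos = vel = integral = prev_err = 0.0
setpoint = 1.0

for step in range(1000):
    err = setpoint - pos
    # the RL agent observes the error signal and adapts the PID gains
    dkp, dki, dkd = ACTIONS[policy((err, prev_err))]
    kp, ki, kd = max(kp + dkp, 0.0), max(ki + dki, 0.0), max(kd + dkd, 0.0)
    # standard PID law with the current (adapted) gains
    integral += err * DT
    derivative = (err - prev_err) / DT
    u = kp * err + ki * integral + kd * derivative
    pos, vel = servo_step(pos, vel, u)
    prev_err = err
```

In the paper, a conventional fixed-gain PID controller serves as the baseline against which the DQN- and A2C-tuned variants are compared.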
Published:
This is a description of your talk, which is a Markdown file that can be all markdown-ified like any other post. Yay Markdown!
Published:
This is a description of your conference proceedings talk; note the different value in the type field. You can put anything in this field.
Teaching Assistant, IIEST Shibpur, 2019
Teaching Assistant, IIEST Shibpur, 2020
Teaching Assistant, IIEST Shibpur, 2020
Teaching Assistant, IIT Delhi, 2021
Teaching Assistant, IIT Delhi, 2021
Teaching Assistant, IIT Delhi, 2021