TY - CONF
T1 - Leveraging conventional control to improve performance of systems using reinforcement learning
AU - Eaglin, Gerald
AU - Vaughan, Joshua
N1 - Publisher Copyright:
Copyright © 2020 ASME.
PY - 2020
Y1 - 2020
AB - While many model-based methods have been proposed for optimal control, it is often difficult to generate model-based optimal controllers for nonlinear systems. One model-free method to solve for optimal control policies is reinforcement learning. Reinforcement learning iteratively trains an agent to optimize a reward function. However, agents often perform poorly at the beginning of training and require a large number of trials to converge to a successful policy. A method is proposed to incorporate domain knowledge of dynamics and control into the controllers using reinforcement learning to reduce the training time needed. Simulations are presented to compare the performance of agents utilizing domain knowledge to those that do not use domain knowledge. The results show that the agents with domain knowledge can accomplish the desired task with less training time than those without domain knowledge.
UR - http://www.scopus.com/inward/record.url?scp=85100922775&partnerID=8YFLogxK
DO - 10.1115/DSCC2020-3307
M3 - Conference contribution
AN - SCOPUS:85100922775
T3 - ASME 2020 Dynamic Systems and Control Conference, DSCC 2020
BT - Intelligent Transportation/Vehicles; Manufacturing; Mechatronics; Engine/After-Treatment Systems; Soft Actuators/Manipulators; Modeling/Validation; Motion/Vibration Control Applications; Multi-Agent/Networked Systems; Path Planning/Motion Control; Renewable/Smart Energy Systems; Security/Privacy of Cyber-Physical Systems; Sensors/Actuators; Tracking Control Systems; Unmanned Ground/Aerial Vehicles; Vehicle Dynamics, Estimation, Control; Vibration/Control Systems; Vibrations
PB - American Society of Mechanical Engineers
T2 - ASME 2020 Dynamic Systems and Control Conference, DSCC 2020
Y2 - 5 October 2020 through 7 October 2020
ER -