TY - GEN
T1 - Training reinforcement learning models via an adversarial evolutionary algorithm
AU - Coletti, Mark
AU - Gunaratne, Chathika
AU - Schuman, Catherine
AU - Patton, Robert
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/8/29
Y1 - 2022/8/29
N2 - When training for control problems, using more episodes usually leads to better generalizability, but more episodes also require significantly more training time. There are a variety of approaches for selecting training episodes, including fixed episodes, uniform sampling, and stochastic sampling, but all of them can leave gaps in the training landscape. In this work, we describe an approach that leverages an adversarial evolutionary algorithm to identify the worst-performing states for a given model. We then use information about these states in the next cycle of training, repeating this process until the desired level of model performance is met. We demonstrate this approach on the OpenAI Gym cart-pole problem. We show that, compared with stochastic sampling, the adversarial evolutionary algorithm did not reduce the number of training episodes needed to attain model generalizability, and in fact performed slightly worse.
AB - When training for control problems, using more episodes usually leads to better generalizability, but more episodes also require significantly more training time. There are a variety of approaches for selecting training episodes, including fixed episodes, uniform sampling, and stochastic sampling, but all of them can leave gaps in the training landscape. In this work, we describe an approach that leverages an adversarial evolutionary algorithm to identify the worst-performing states for a given model. We then use information about these states in the next cycle of training, repeating this process until the desired level of model performance is met. We demonstrate this approach on the OpenAI Gym cart-pole problem. We show that, compared with stochastic sampling, the adversarial evolutionary algorithm did not reduce the number of training episodes needed to attain model generalizability, and in fact performed slightly worse.
KW - adversarial evolutionary algorithms
KW - reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85147435816&partnerID=8YFLogxK
U2 - 10.1145/3547276.3548635
DO - 10.1145/3547276.3548635
M3 - Conference contribution
AN - SCOPUS:85147435816
T3 - ACM International Conference Proceeding Series
BT - 51st International Conference on Parallel Processing, ICPP 2022 - Workshop Proceedings
PB - Association for Computing Machinery
T2 - 51st International Conference on Parallel Processing, ICPP 2022
Y2 - 29 August 2022 through 1 September 2022
ER -