Interactive reinforcement learning and error-related potential classification for implicit feedback

Sanghyun Choo, Chang S. Nam

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

Implicit feedback-based interactive reinforcement learning (IRL) using error-related potentials (ErrPs) is an emerging research topic in the artificial intelligence (AI) community. IRL improves the performance of reinforcement learning (RL) algorithms both efficiently and effectively by intervening implicitly through ErrPs. This approach is closely related to human-centered AI (HCAI) in that human feedback is directly incorporated into the RL model. However, researchers in the HCAI field still need an accessible account of how to classify ErrPs and how to develop IRL based on human feedback. Therefore, in this book chapter, we introduce state-of-the-art machine learning and deep learning methods for ErrP classification and demonstrate their performance on a public brain-computer interface dataset. We also introduce the mistake-correcting technique, one of the IRL methods, and show its effectiveness compared with the original RL method on an RL problem provided in OpenAI Gym. These introductions to ErrP classification and IRL will help lower the barriers to entry into implicit feedback-based IRL for researchers in the HCAI field.
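To make the mistake-correcting idea concrete, the sketch below is a minimal, hypothetical illustration (not the chapter's implementation): tabular Q-learning on a toy 5-state chain, where a simulated ErrP classifier detects an "erroneous" action (here, moving away from the goal) with a given accuracy and overrides it before execution, standing in for implicit human feedback. The environment, hyperparameters, and `corrector_acc` parameter are all assumptions for illustration.

```python
import random

def q_learning(episodes=300, corrector_acc=0.0, seed=0):
    """Tabular Q-learning on a 5-state chain; the goal is state 4.

    corrector_acc simulates 'mistake correcting' via implicit feedback:
    with that probability, an erroneous action (moving away from the
    goal, as an ErrP classifier might flag it) is replaced by the
    corrected action before it is executed.
    Returns the list of per-episode returns.
    """
    rng = random.Random(seed)
    n_states, moves = 5, (-1, +1)          # action 0: left, action 1: right
    Q = [[0.0, 0.0] for _ in range(n_states)]
    alpha, gamma, eps = 0.5, 0.95, 0.1
    returns = []
    for _ in range(episodes):
        s, total = 0, 0.0
        for _ in range(20):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda i: Q[s][i])
            # mistake correcting: override an erroneous action with
            # probability corrector_acc (simulated ErrP detection)
            if a == 0 and rng.random() < corrector_acc:
                a = 1
            s2 = min(max(s + moves[a], 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else -0.01
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s, total = s2, total + r
            if s == n_states - 1:
                break
        returns.append(total)
    return returns

plain = q_learning(corrector_acc=0.0)   # original RL
irl = q_learning(corrector_acc=0.8)     # RL with simulated implicit feedback
```

The corrector steers exploration away from detected mistakes during learning, which is the sense in which the chapter's IRL is compared against plain RL; the chapter itself evaluates this on an OpenAI Gym task with real ErrP-based feedback rather than a simulated detector.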

Original language: English
Title of host publication: Human-Centered Artificial Intelligence
Subtitle of host publication: Research and Applications
Publisher: Elsevier Inc.
Pages: 127-143
Number of pages: 17
ISBN (Electronic): 9780323856485
ISBN (Print): 9780323856492
DOIs
State: Published - Jan 1 2022
Externally published: Yes

Keywords

  • Deep Learning
  • Electroencephalogram (EEG)
  • Error-related Potential (ErrP)
  • Interactive Reinforcement Learning (IRL)
  • Machine Learning
