Abstract
Implicit feedback-based interactive reinforcement learning (IRL) using error-related potentials (ErrP) is an emerging research topic in the artificial intelligence (AI) community. This form of IRL improves the performance of RL algorithms both efficiently and effectively by intervening implicitly through ErrP. The approach is closely related to human-centered AI (HCAI) in that human feedback is directly incorporated into the RL model. However, researchers in the HCAI field still need guidance on how to classify ErrP and how to build IRL on top of such human feedback. Therefore, in this book chapter, we introduce state-of-the-art machine learning and deep learning methods for ErrP classification and then demonstrate their performance on a public brain-computer interface dataset. We also introduce the mistake-correcting technique, one of the IRL methods, and show the IRL's effectiveness relative to the original RL method on an RL problem provided in OpenAI Gym. These introductions to ErrP classification and IRL will help lower the barrier to entry into implicit feedback-based IRL for researchers in the HCAI field.
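To make the idea concrete, the following is a minimal sketch, not the chapter's actual code, of mistake correcting in an IRL loop: a tabular Q-learning agent on a toy corridor task receives a simulated ErrP-like signal (a noisy binary error flag, decoder accuracy of 0.8 is an assumption) that penalizes actions judged to be mistakes. The environment, reward shaping value, and hyperparameters are all illustrative choices, not those used in the chapter.

```python
import random

# Sketch of implicit feedback-based IRL (mistake correcting):
# Q-learning on a 1-D corridor of 6 states; reaching state 5 gives reward 1.
# A simulated, noisy "ErrP decoder" flags mistaken actions (here, left moves),
# and flagged actions receive an extra negative reward.

N_STATES = 6            # states 0..5; state 5 is the goal
ACTIONS = [0, 1]        # 0 = left, 1 = right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
ERRP_ACCURACY = 0.8     # assumed decoder accuracy, for illustration only

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def simulated_errp(action):
    """Noisy human feedback: correctly flags a left move as an error
    with probability ERRP_ACCURACY, otherwise gives the wrong label."""
    is_error = (action == 0)
    return is_error if random.random() < ERRP_ACCURACY else not is_error

def train(use_errp, episodes=200, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    steps_log = []
    for _ in range(episodes):
        state, done, steps = 0, False, 0
        while not done and steps < 100:
            action = (random.choice(ACTIONS) if random.random() < EPS
                      else max(ACTIONS, key=lambda a: q[state][a]))
            nxt, reward, done = step(state, action)
            if use_errp and simulated_errp(action):
                reward -= 0.5   # mistake correcting: penalize flagged action
            q[state][action] += ALPHA * (
                reward + GAMMA * max(q[nxt]) - q[state][action])
            state, steps = nxt, steps + 1
        steps_log.append(steps)
    return sum(steps_log[-50:]) / 50  # avg steps over the last 50 episodes

plain = train(use_errp=False)
assisted = train(use_errp=True)
print(f"plain RL: {plain:.1f} steps; ErrP-assisted: {assisted:.1f} steps")
```

Because the task's only environmental reward sits at the far end of the corridor, the plain agent rarely discovers it within the step budget, while the ErrP-penalized agent is steered toward the goal much earlier; this mirrors, in miniature, the comparison between original RL and implicit feedback-based IRL described in the abstract.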
| Original language | English |
|---|---|
| Title of host publication | Human-Centered Artificial Intelligence |
| Subtitle of host publication | Research and Applications |
| Publisher | Elsevier Inc. |
| Pages | 127-143 |
| Number of pages | 17 |
| ISBN (Electronic) | 9780323856485 |
| ISBN (Print) | 9780323856492 |
| DOIs | |
| State | Published - Jan 1 2022 |
| Externally published | Yes |
Keywords
- Deep Learning
- Electroencephalogram (EEG)
- Error-related Potential (ErrP)
- Interactive Reinforcement Learning (IRL)
- Machine Learning