In-Person Poster presentation / top 25% paper
BC-IRL: Learning Generalizable Reward Functions from Demonstrations
Andrew Szot · Amy Zhang · Dhruv Batra · Zsolt Kira · Franziska Meier
MH1-2-3-4 #99
Keywords: [ inverse reinforcement learning ] [ imitation learning ] [ reward learning ] [ reinforcement learning ]
How well do reward functions learned with inverse reinforcement learning (IRL) generalize? We illustrate that state-of-the-art IRL algorithms, which maximize a maximum-entropy objective, learn rewards that overfit to the demonstrations. Such rewards struggle to provide meaningful signals for states not covered by the demonstrations, a major detriment when the reward is used to learn policies in new situations. We introduce BC-IRL, a new inverse reinforcement learning method that learns reward functions which generalize better than those of maximum-entropy IRL approaches. In contrast to the MaxEnt framework, which learns to maximize rewards around demonstrations, BC-IRL updates the reward parameters such that the policy trained with the new reward better matches the expert demonstrations. We show that BC-IRL learns rewards that generalize better on an illustrative simple task and two continuous robotic control tasks, achieving over twice the success rate of baselines in challenging generalization settings.
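The abstract's core idea, updating the reward so that a policy trained with it imitates the expert better, can be read as a bi-level optimization: an inner differentiable policy update under the current learned reward, and an outer behavior-cloning loss on the updated policy that backpropagates into the reward parameters. The sketch below illustrates that structure under stated assumptions; it is not the authors' implementation, and all names (`reward_net`, `W`, `expert_states`, `expert_actions`) as well as the linear-Gaussian policy and random placeholder data are hypothetical simplifications.

```python
# Minimal sketch of the bi-level structure described in the abstract (assumptions, not the paper's code).
import torch

torch.manual_seed(0)
state_dim, action_dim = 4, 2

# Learned reward r_phi(s): a small MLP whose parameters phi are the outer variables.
reward_net = torch.nn.Sequential(
    torch.nn.Linear(state_dim, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
reward_opt = torch.optim.Adam(reward_net.parameters(), lr=1e-3)

# Deliberately simple linear-Gaussian policy a ~ N(W s, sigma^2 I) so the inner
# policy update can be written out explicitly and differentiated through.
W = torch.zeros(action_dim, state_dim, requires_grad=True)
log_std = torch.zeros(action_dim)

# Placeholder expert demonstrations (random tensors here, real rollouts in practice).
expert_states = torch.randn(128, state_dim)
expert_actions = torch.randn(128, action_dim)

inner_lr = 0.1
for outer_step in range(200):
    # --- Inner step: one policy-gradient update using the current learned reward.
    states = torch.randn(256, state_dim)                 # stand-in for sampled rollout states
    dist = torch.distributions.Normal(states @ W.t(), log_std.exp())
    actions = dist.sample()
    rewards = reward_net(states).squeeze(-1)             # kept in the graph so phi receives gradient
    inner_loss = -(dist.log_prob(actions).sum(-1) * rewards).mean()
    grad_W = torch.autograd.grad(inner_loss, W, create_graph=True)[0]
    W_new = W - inner_lr * grad_W                        # updated policy, differentiable w.r.t. phi

    # --- Outer step: behavior-cloning loss of the *updated* policy on the demonstrations.
    expert_dist = torch.distributions.Normal(expert_states @ W_new.t(), log_std.exp())
    bc_loss = -expert_dist.log_prob(expert_actions).sum(-1).mean()
    reward_opt.zero_grad()
    bc_loss.backward()                                   # gradient flows through W_new into phi
    reward_opt.step()

    # Commit the inner update so the next outer step starts from the new policy.
    with torch.no_grad():
        W.copy_(W_new)
```

Note the role of `create_graph=True`: it keeps the inner policy-gradient step differentiable, so the imitation loss of the updated policy can propagate into the reward parameters. This is the contrast the abstract draws with MaxEnt IRL, where the reward gradient instead comes from maximizing reward around the demonstrations.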