In-Person Poster presentation / poster accept
Personalized Reward Learning with Interaction-Grounded Learning (IGL)
Jessica Maghakian · Paul Mineiro · Kishan Panaganti · Mark Rucker · Akanksha Saran · Cheng Tan
MH1-2-3-4 #51
Keywords: [ interaction-grounded learning ] [ interactive machine learning ] [ recommendation systems ] [ contextual bandits ] [ Applications ]
In an era of countless content offerings, recommender systems alleviate information overload by providing users with personalized content suggestions. Because explicit user feedback is scarce, modern recommender systems typically optimize the same fixed combination of implicit feedback signals for all users. However, this approach disregards a growing body of work showing that (i) users employ implicit signals in diverse ways, expressing anything from satisfaction to active dislike, and (ii) different users communicate their preferences differently. We propose applying the recent Interaction-Grounded Learning (IGL) paradigm to the challenge of learning representations of diverse user communication modalities. Rather than requiring a fixed, human-designed reward function, IGL learns a personalized reward function for each user and then optimizes directly for the latent user satisfaction. We demonstrate the success of IGL in experiments on simulations as well as on real-world production traces.
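To make the abstract's core idea concrete, here is a minimal toy sketch of the setting: two simulated users emit the *same* implicit signal (a click) with *opposite* meanings, a per-user reward decoder is fit first, and a bandit then optimizes decoded rewards only, never seeing latent satisfaction. All names, dimensions, and the small explicitly-labeled seed phase are illustrative assumptions for this sketch, not the paper's actual (unsupervised) IGL grounding procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy: two users signal the SAME latent satisfaction with
# DIFFERENT implicit feedback (user 0 clicks when satisfied; user 1
# clicks when annoyed, e.g. to dismiss). All numbers are assumptions.
n_actions = 3
true_pref = {0: 1, 1: 2}          # each user's latently preferred action

def implicit_feedback(user, satisfied):
    """Observable 'click' signal; its meaning is inverted for user 1."""
    click = satisfied if user == 0 else not satisfied
    return int(click)

# Per-user reward decoder psi[user][click] -> estimated reward. Here it
# is fit from a small seed of explicitly labeled rounds, a deliberate
# simplification of IGL's unsupervised grounding to keep the sketch short.
psi = {}
for u in (0, 1):
    counts = np.zeros((2, 2))     # counts[click, satisfied]
    for _ in range(50):
        a = int(rng.integers(n_actions))
        s = int(a == true_pref[u])
        counts[implicit_feedback(u, s), s] += 1
    psi[u] = {c: int(counts[c].argmax()) for c in (0, 1)}

# Bandit phase: learn per-user action values from DECODED rewards only;
# latent satisfaction is never revealed to the learner here.
Q = np.zeros((2, n_actions))
for t in range(2000):
    u = int(rng.integers(2))
    a = int(rng.integers(n_actions)) if rng.random() < 0.1 else int(Q[u].argmax())
    s = int(a == true_pref[u])    # latent, unobserved by the learner
    r_hat = psi[u][implicit_feedback(u, s)]   # decoded reward
    Q[u, a] += 0.1 * (r_hat - Q[u, a])
```

After training, `Q[u].argmax()` recovers each user's preferred action for both users, even though a single fixed interpretation of the click signal would have mis-ranked one of them; this is the failure mode of a shared implicit-feedback objective that the paper's personalized reward learning avoids.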