Poster in Workshop: Socially Responsible Machine Learning
Maximizing Predictive Entropy as Regularization for Supervised Classification
Amrith Setlur · Benjamin Eysenbach · Sergey Levine
Abstract:
Supervised learning methods that directly optimize the cross-entropy loss on training data often overfit. This overfitting is typically mitigated by regularizing the loss function (e.g., label smoothing) or by minimizing the same loss on new examples (e.g., data augmentation and adversarial training). In this work, we propose a complementary regularization strategy, Maximum Predictive Entropy (MPE), which forces the model to be uncertain on new, algorithmically-generated inputs. Across a range of tasks, we demonstrate that our computationally-efficient method improves test accuracy, and that its benefits are complementary to methods such as label smoothing and data augmentation.
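The objective the abstract describes can be sketched as a standard cross-entropy loss plus an entropy-maximization term on a separate batch of generated inputs. The sketch below is a minimal NumPy illustration under assumptions: the function names, the weight `lam`, and the specific form of the combined loss are illustrative, not the paper's actual implementation, and the synthetic batch here stands in for whatever algorithmically-generated inputs the method uses.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mpe_loss(train_logits, train_labels, synth_logits, lam=0.1):
    # Cross-entropy on real labeled data, minus lam times the mean
    # predictive entropy on synthetic inputs: minimizing this loss
    # therefore *maximizes* entropy on the synthetic batch.
    p = softmax(train_logits)
    n = train_logits.shape[0]
    ce = -np.log(p[np.arange(n), train_labels] + 1e-12).mean()
    q = softmax(synth_logits)
    entropy = -(q * np.log(q + 1e-12)).sum(axis=-1).mean()
    return ce - lam * entropy
```

Because the entropy term enters with a negative sign, a model that is confident on the synthetic inputs pays a higher loss than one that predicts near-uniform probabilities on them, while the cross-entropy term still rewards accuracy on the real training data.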