In-Person Poster presentation / poster accept
What Can we Learn From The Selective Prediction And Uncertainty Estimation Performance Of 523 Imagenet Classifiers?
Ido Galil · Mohammed Dabbah · Ran El-Yaniv
MH1-2-3-4 #24
Keywords: [ reject option ] [ selective prediction ] [ neural networks ] [ deep learning ] [ selective classification ] [ risk coverage trade-off ] [ Deep Learning and representational learning ]
When deployed for risk-sensitive tasks, deep neural networks must include an uncertainty estimation mechanism. Here we examine the relationship between deep architectures and their respective training regimes, and their corresponding selective prediction and uncertainty estimation performance. We consider several of the most popular previously proposed estimation performance metrics, including AUROC, ECE, and AURC, as well as coverage for a selective accuracy constraint. We present a novel and comprehensive study of the selective prediction and uncertainty estimation performance of 523 existing pretrained deep ImageNet classifiers available in popular repositories. We identify numerous previously unknown factors that affect uncertainty estimation and examine the relationships between the different metrics. We find that distillation-based training regimes consistently yield better uncertainty estimation than other training schemes, such as vanilla training, pretraining on a larger dataset, and adversarial training. Moreover, we find a subset of ViT models that outperforms all other models in terms of uncertainty estimation performance. For example, we discovered an unprecedented 99% top-1 selective accuracy on ImageNet at 47% coverage (and 95% top-1 accuracy at 80% coverage) for a ViT model, whereas a competing EfficientNet-V2-XL cannot satisfy these accuracy constraints at any level of coverage. Our companion paper, also published at ICLR 2023 (A framework for benchmarking class-out-of-distribution detection and its application to ImageNet), examines the performance of these classifiers in a class-out-of-distribution setting.
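For readers unfamiliar with the metrics named above, the following is a minimal sketch (not the authors' evaluation code) of how the risk-coverage trade-off, AURC, and coverage under a selective accuracy constraint can be computed from a classifier's confidence scores. All function names are illustrative, and the softmax response is assumed as the confidence score; the paper compares additional estimators.

```python
import numpy as np

def risk_coverage_curve(confidence, correct):
    """Selective risk at each coverage level.

    confidence: (n,) confidence scores (higher = more confident).
    correct:    (n,) 1 if the classifier's top-1 prediction is correct, else 0.
    Returns the coverage levels and the selective risk (error rate among
    the accepted samples) at each level.
    """
    order = np.argsort(-confidence)               # accept most confident first
    correct = np.asarray(correct, dtype=float)[order]
    n = len(correct)
    accepted = np.arange(1, n + 1)
    coverage = accepted / n                       # fraction of samples accepted
    risk = np.cumsum(1.0 - correct) / accepted    # error rate among accepted
    return coverage, risk

def aurc(confidence, correct):
    """Area under the risk-coverage curve (lower is better)."""
    coverage, risk = risk_coverage_curve(confidence, correct)
    return np.trapz(risk, coverage)

def coverage_at_accuracy(confidence, correct, target_acc=0.99):
    """Largest coverage whose selective accuracy meets the constraint,
    e.g. the paper's '99% top-1 selective accuracy at 47% coverage'."""
    coverage, risk = risk_coverage_curve(confidence, correct)
    feasible = (1.0 - risk) >= target_acc
    return coverage[feasible].max() if feasible.any() else 0.0

# Example usage with softmax outputs (probs: (n, 1000), labels: (n,)):
#   confidence = probs.max(axis=1)
#   correct = (probs.argmax(axis=1) == labels)
#   print(aurc(confidence, correct), coverage_at_accuracy(confidence, correct))
```

Under this formulation, a model with better-ranked confidences accumulates errors later along the coverage axis, giving a lower AURC and a higher coverage at any fixed accuracy constraint.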