Poster
Improved Active Learning via Dependent Leverage Score Sampling
Atsushi Shimizu · Xiaoou Cheng · Christopher Musco · Jonathan Weare
Halle B
Oral presentation: Oral 8B
Fri 10 May 6:45 a.m. PDT — 7:30 a.m. PDT
Poster session: Fri 10 May 7:30 a.m. PDT — 9:30 a.m. PDT
Abstract:
We show how to obtain improved active learning methods in the agnostic (adversarial noise) setting by combining marginal leverage score sampling with non-independent sampling strategies that promote spatial coverage. In particular, we propose an easily implemented method based on the \emph{pivotal sampling algorithm}, which we test on problems motivated by learning-based methods for parametric PDEs and uncertainty quantification. In comparison to independent sampling, our method reduces the number of samples needed to reach a given target accuracy by up to $50\%$.

We support our findings with two theoretical results. First, we show that any non-independent leverage score sampling method that obeys a weak \emph{one-sided $\ell_{\infty}$ independence condition} (which includes pivotal sampling) can actively learn $d$-dimensional linear functions with $O(d\log d)$ samples, matching independent sampling. This result extends recent work on matrix Chernoff bounds under $\ell_{\infty}$ independence, and may be of interest for analyzing other sampling strategies beyond pivotal sampling. Second, we show that, for the important case of polynomial regression, our pivotal method obtains an improved bound of $O(d)$ samples.
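To make the sampling pipeline concrete, here is a minimal sketch, not the authors' reference implementation, of combining marginal leverage score sampling with Deville–Tillé pivotal sampling for active linear regression. The function names, the candidate pool `A`, the sample budget `m`, and the toy labels are illustrative assumptions for the example.

```python
# Illustrative sketch only: leverage score sampling + pivotal sampling for
# active linear regression. The setup (A, y, m) is an assumption for this
# example, not the paper's reference code.
import numpy as np

def leverage_scores(A):
    """Marginal leverage scores: squared row norms of Q from a thin QR of A."""
    Q, _ = np.linalg.qr(A)
    return np.sum(Q**2, axis=1)

def pivotal_sample(pi, rng):
    """Deville-Tille pivotal sampling with given marginal inclusion probabilities.

    Adjacent fractional units 'duel' until every probability is rounded to 0 or 1;
    marginals are preserved, and ordering units spatially keeps the duels local.
    """
    pi = np.asarray(pi, dtype=float).copy()
    frac = [i for i in range(len(pi)) if 0.0 < pi[i] < 1.0]
    while len(frac) >= 2:
        i, j = frac[0], frac[1]
        s = pi[i] + pi[j]
        if s < 1.0:   # one of the two is excluded, the other keeps mass s
            if rng.random() < pi[j] / s:
                pi[i], pi[j] = 0.0, s
            else:
                pi[i], pi[j] = s, 0.0
        else:         # one of the two is included, the other keeps mass s - 1
            if rng.random() < (1.0 - pi[j]) / (2.0 - s):
                pi[i], pi[j] = 1.0, s - 1.0
            else:
                pi[i], pi[j] = s - 1.0, 1.0
        frac = [k for k in frac if 0.0 < pi[k] < 1.0]
    if frac:          # at most one fractional unit can remain; round it randomly
        k = frac[0]
        pi[k] = 1.0 if rng.random() < pi[k] else 0.0
    return np.flatnonzero(pi == 1.0)

rng = np.random.default_rng(0)
n, d, m = 2000, 10, 60                      # candidate pool size, dimension, sample budget
A = rng.standard_normal((n, d))             # toy design matrix over the candidate pool
tau = leverage_scores(A)
pi = np.minimum(1.0, m * tau / tau.sum())   # marginal inclusion probabilities ~ leverage
S = pivotal_sample(pi, rng)

# Query labels only at the selected rows, then solve reweighted least squares.
y = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)  # toy noisy labels
w = 1.0 / np.sqrt(pi[S])                                       # importance weights
x_hat, *_ = np.linalg.lstsq(A[S] * w[:, None], y[S] * w, rcond=None)
```

Because consecutive units in the list duel against each other, sorting the candidate points by spatial position before calling `pivotal_sample` is one simple way to encourage the spatial coverage that motivates the non-independent sampling strategy described in the abstract.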