

Spotlight Poster

Harnessing Density Ratios for Online Reinforcement Learning

Philip Amortila · Dylan Foster · Nan Jiang · Ayush Sekhari · Tengyang Xie

Halle B
Tue 7 May 1:45 a.m. PDT — 3:45 a.m. PDT

Abstract:

The theories of offline and online reinforcement learning, despite having evolved in parallel, have recently started to see unification, and algorithms and concepts in one setting often have natural counterparts in the other. However, the notion of density ratio modeling, an emerging topic in offline RL, has been largely absent from online RL, perhaps for good reason: the very existence and boundedness of density ratios relies on a dataset with good coverage, but the core challenge in online RL is to collect such an exploratory dataset without having one to start with. In this work we show—perhaps surprisingly—that density ratio-based algorithms have online counterparts. Assuming the mere existence of an exploratory distribution with good coverage, a structural condition known as coverability (Xie et al., 2023), we give an algorithm (GLOW) which performs sample-efficient online exploration under value-function and density-ratio realizability. GLOW addresses unbounded density ratios via careful use of truncation, and combines this with optimism to guide exploration. GLOW is computationally inefficient; we complement it with a more efficient counterpart, HYGLOW, for the Hybrid RL setting (Song et al., 2023) in which online RL is augmented with additional offline data. HYGLOW is derived as a special case of a novel meta-algorithm, H2O, which provides a provable black-box reduction from hybrid RL to offline RL.
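The core mechanism mentioned in the abstract, reweighting Bellman errors by truncated density ratios so that importance weights stay bounded, can be illustrated with a short sketch. This is not the paper's GLOW algorithm: the function names, the clipping threshold, and the squared-residual objective below are illustrative assumptions only.

```python
import numpy as np

def truncated_ratio(ratios, clip=10.0):
    # Clip estimated density ratios so the importance weights stay bounded,
    # mirroring the truncation idea described in the abstract (illustrative only).
    return np.minimum(ratios, clip)

def weighted_bellman_error(q, q_next, rewards, ratios, gamma=0.99, clip=10.0):
    # Squared Bellman residual reweighted by truncated density ratios.
    # q, q_next, rewards, ratios: arrays over a batch of transitions.
    w = truncated_ratio(ratios, clip)
    td_target = rewards + gamma * q_next
    return np.mean(w * (q - td_target) ** 2)

# Hypothetical usage on a small batch of transitions.
q = np.array([1.0, 0.5, 2.0])
q_next = np.array([0.9, 0.4, 1.8])
rewards = np.array([0.1, 0.0, 0.2])
ratios = np.array([0.5, 3.0, 50.0])  # the last ratio would be clipped
print(weighted_bellman_error(q, q_next, rewards, ratios))
```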
