Virtual presentation / poster accept
Critic Sequential Monte Carlo
Vasileios Lioutas · Jonathan Lavington · Justice Sefas · Matthew Niedoba · Yunpeng Liu · Berend Zwartsenberg · Setareh Dabiri · Frank Wood · Adam Scibior
Keywords: [ sequential monte carlo ] [ driving behavior models ] [ heuristic factors ] [ soft Q-learning ] [ reinforcement learning as inference ] [ Probabilistic Methods ]
We introduce CriticSMC, a new algorithm for planning as inference built from a composition of sequential Monte Carlo and learned soft Q-function heuristic factors. These heuristic factors, obtained from parametric approximations of the marginal likelihood ahead, more effectively guide SMC towards the desired target distribution, which is particularly helpful for planning in environments with hard constraints placed sparsely in time. Compared with previous work, we modify the placement of these heuristic factors, which allows us to cheaply propose and evaluate large numbers of putative action particles, greatly increasing inference and planning efficiency. CriticSMC is compatible with informative priors whose density function need not be known, and can be used as a model-free control algorithm. Our experiments on collision avoidance in a high-dimensional simulated driving task show that CriticSMC significantly reduces collision rates at low computational cost while maintaining the realism and diversity of driving behaviors across vehicles and environment scenarios.
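The sketch below illustrates the mechanism the abstract describes: for each particle, a large batch of putative actions is drawn from the behavioral prior, each action is weighted by a learned soft Q critic acting as a heuristic factor, and a single action is kept by softmax resampling. It is a minimal illustration only, assuming hypothetical prior_sample, critic, and env_step callables that are not part of the paper; it omits the full SMC sweep and particle-level resampling.

```python
import numpy as np

def critic_smc_action_step(states, prior_sample, critic, env_step,
                           n_putative=128, rng=None):
    """One planning step per particle (illustrative sketch, not the paper's code).

    states       : list of per-particle environment states
    prior_sample : (state, n) -> array of n candidate actions from the prior
    critic       : (state, actions) -> array of soft Q-values (heuristic factors)
    env_step     : (state, action) -> next state
    """
    rng = np.random.default_rng() if rng is None else rng
    next_states = []
    for s in states:
        # Cheaply propose many putative actions from the (possibly black-box) prior.
        actions = prior_sample(s, n_putative)          # shape (n_putative, action_dim)
        # Score all candidates with the learned critic without stepping the environment.
        log_w = critic(s, actions)                     # shape (n_putative,)
        # Softmax resampling: keep one action in proportion to exp(soft Q-value).
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        idx = rng.choice(n_putative, p=w)
        # Only the selected action is actually executed in the environment.
        next_states.append(env_step(s, actions[idx]))
    return next_states
```

Scoring candidates with the critic before any environment transition is what makes proposing large numbers of putative action particles cheap, which is the efficiency gain the abstract attributes to the modified placement of the heuristic factors.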