

Spotlight Poster

Adaptive Rational Activations to Boost Deep Reinforcement Learning

Quentin Delfosse · Patrick Schramowski · Martin Mundt · Alejandro Molina Ramirez · Kristian Kersting

Halle B
Fri 10 May 1:45 a.m. PDT — 3:45 a.m. PDT
 

Abstract:

Recent insights from biology show that intelligence emerges not only from the connections between neurons but also from individual neurons, which shoulder more computational responsibility than previously anticipated. Neural plasticity should therefore be critical in constantly changing reinforcement learning (RL) environments, yet current approaches still primarily employ static activation functions. In this work, we motivate the use of adaptable activation functions in RL and show that rational activation functions are particularly suitable for augmenting plasticity. Inspired by residual networks, we derive a condition under which rational units are closed under residual connections and formulate a naturally regularised version. The proposed joint-rational activation allows for desirable degrees of flexibility, yet regularises plasticity to an extent that avoids overfitting by leveraging a mutual set of activation function parameters across layers. We demonstrate that equipping popular algorithms with (joint) rational activations leads to consistent improvements on different games from the Arcade Learning Environment benchmark, notably making DQN competitive with DDQN and Rainbow.
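The abstract describes two mechanisms: a per-layer rational activation (a learnable ratio of polynomials) and a joint variant that shares one coefficient set across layers. As a rough illustration of both ideas, here is a minimal PyTorch sketch; the class name, the polynomial degrees m=5 and n=4, the "safe" absolute-value denominator, and the small random initialisation are assumptions made for illustration, not the paper's exact formulation (such units are usually initialised to approximate a known activation like Leaky ReLU).

```python
import torch
import torch.nn as nn


class RationalActivation(nn.Module):
    """Learnable rational activation R(x) = P(x) / Q(x).

    Sketch of a Pade-style unit with an assumed "safe" denominator
    Q(x) = 1 + |b_1 x + ... + b_n x^n|, which cannot reach zero.
    """

    def __init__(self, m: int = 5, n: int = 4):
        super().__init__()
        self.numerator = nn.Parameter(0.1 * torch.randn(m + 1))  # a_0 .. a_m
        self.denominator = nn.Parameter(0.1 * torch.randn(n))    # b_1 .. b_n

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # P(x) = a_0 + a_1 x + ... + a_m x^m, evaluated via Horner's scheme.
        p = torch.zeros_like(x)
        for a in reversed(self.numerator):
            p = p * x + a
        # Raw denominator polynomial b_1 x + ... + b_n x^n.
        q = torch.zeros_like(x)
        for b in reversed(self.denominator):
            q = (q + b) * x
        return p / (1.0 + q.abs())


# Joint-rational sketch: a single shared instance serves as every layer's
# activation, so all layers adapt one mutual coefficient set.
shared_act = RationalActivation()
net = nn.Sequential(
    nn.Linear(4, 64), shared_act,
    nn.Linear(64, 64), shared_act,
    nn.Linear(64, 2),
)
```

Reusing the same module instance at every activation position means both entries in the Sequential point to the same parameters, which is one natural way to realise the "mutual set of activation function parameters across layers" that the abstract attributes to the joint-rational variant.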
