Poster
Scene Transformer: A unified architecture for predicting future trajectories of multiple agents
Jiquan Ngiam · Vijay Vasudevan · Benjamin Caine · Zhengdong Zhang · Hao-Tien (Lewis) Chiang · Jeffrey Ling · Rebecca Roelofs · Alex Bewley · Chenxi Liu · Ashish Venugopal · David Weiss · Ben Sapp · Zhifeng Chen · Jonathon Shlens
Keywords: [ multi-task learning ] [ attention ] [ trajectory prediction ]
Predicting the motion of multiple agents is necessary for planning in dynamic environments. This task is challenging for autonomous driving since agents (e.g., vehicles and pedestrians) and their associated behaviors may be diverse and influence one another. Most prior work has focused on predicting independent futures for each agent based on all past motion, and planning against these independent predictions. However, planning against independent predictions can make it challenging to represent the future interaction possibilities between different agents, leading to sub-optimal planning. In this work, we formulate a model for predicting the behavior of all agents jointly, producing consistent futures that account for interactions between agents. Inspired by recent language modeling approaches, we use a masking strategy as the query to our model, enabling one to invoke a single model to predict agent behavior in many ways, such as conditioning on the goal or full future trajectory of the autonomous vehicle, or on the behavior of other agents in the environment. Our model architecture employs attention to combine features across road elements, agent interactions, and time steps. We evaluate our approach on autonomous driving datasets for both marginal and joint motion prediction, and achieve state-of-the-art performance across two popular datasets. By combining a scene-centric approach, an agent-permutation-equivariant model, and a sequence masking strategy, we show that our model can unify a variety of motion prediction tasks, from joint motion prediction to conditional prediction.
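The abstract's masking idea can be sketched as follows: tasks are distinguished by which (agent, time step) slots are revealed to the model and which must be predicted. The function and argument names below are hypothetical illustrations, not the paper's actual API; the paper specifies the exact masks and conditioning variants.

```python
import numpy as np

def make_query_mask(num_agents, num_steps, current_step,
                    task="motion_prediction", av_index=0):
    """Build a binary visibility mask over (agent, time) slots.

    1 = the ground-truth state at that slot is shown to the model;
    0 = the slot is hidden and must be predicted.
    (Illustrative sketch; names and task labels are assumptions.)
    """
    mask = np.zeros((num_agents, num_steps), dtype=np.int32)
    mask[:, :current_step + 1] = 1  # past and present are always visible

    if task == "conditional_prediction":
        # Reveal the autonomous vehicle's full future trajectory;
        # other agents' futures are predicted conditioned on it.
        mask[av_index, :] = 1
    elif task == "goal_conditioned":
        # Reveal only the AV's final (goal) state.
        mask[av_index, -1] = 1
    # "motion_prediction" reveals nothing beyond the present,
    # so the model predicts all agents' futures jointly.
    return mask
```

A single trained model can then serve all of these tasks simply by changing the query mask at inference time, which is the unification the abstract describes.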