Poster
Towards Category Unification of 3D Single Object Tracking on Point Clouds
Jiahao Nie · Zhiwei He · Xudong Lv · Xueyi Zhou · Dong-Kyu Chae · Fei Xie
Halle B
Category-specific models have proven valuable in 3D single object tracking (SOT) under both the Siamese and motion-centric paradigms. However, such over-specialized designs incur redundant parameters, limiting the broader applicability of 3D SOT. This paper first introduces unified models that can simultaneously track objects across all categories using a single network with shared parameters. Specifically, we propose to explicitly encode the distinct attributes associated with different object categories, enabling the model to adapt to cross-category data. We observe that the attribute variance of point cloud objects arises primarily from size and shape (e.g., large, square vehicles vs. small, slender humans). Based on this observation, we design a novel point set representation learning network built on the transformer architecture, termed AdaFormer, which adaptively encodes the dynamically varying shape and size information from cross-category data in a unified manner. We further incorporate the size and shape priors derived from the known template targets into the model's inputs and learning objective, facilitating the learning of unified representations. Equipped with these designs, we construct two unified models, SiamCUT and MoCUT, following the Siamese and motion-centric paradigms, respectively. Extensive experiments demonstrate that the proposed unified models exhibit strong generalization and stability. Furthermore, our unified models outperform their category-specific counterparts by a significant margin (e.g., on the KITTI dataset, 12% and 3% performance gains under the Siamese and motion-centric paradigms, respectively). Code and models will be released.
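To make the abstract's idea of feeding size/shape priors into a shared network concrete, here is a minimal illustrative sketch (not the authors' actual implementation): a hypothetical helper that appends the known template box's dimensions to every point in the search region, so a single category-agnostic network receives explicit scale information for each tracked object.

```python
import numpy as np

def add_size_shape_prior(points, template_box_lwh):
    """Append the template target's size prior (l, w, h) to each point feature.

    points: (N, 3) array of xyz coordinates from the search-region point cloud.
    template_box_lwh: length/width/height of the known template bounding box.

    Hypothetical helper sketching one way to condition a shared network on
    cross-category size information; the paper's actual input encoding may differ.
    """
    points = np.asarray(points)
    prior = np.tile(np.asarray(template_box_lwh, dtype=points.dtype),
                    (points.shape[0], 1))
    # Resulting (N, 6) features carry both geometry and the size prior.
    return np.concatenate([points, prior], axis=1)
```

With such conditioning, a car's points would carry a prior like (4.0, 1.8, 1.6) while a pedestrian's would carry roughly (0.8, 0.6, 1.7), letting one set of shared weights adapt its behavior to either category.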