

Poster

Mind Your Augmentation: The Key to Decoupling Dense Self-Supervised Learning

Congpei Qiu · Tong Zhang · Yanhao Wu · Wei Ke · Mathieu Salzmann · Sabine Susstrunk

Halle B
Fri 10 May 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Dense Self-Supervised Learning (SSL) creates positive pairs by establishing correspondences between regions or points, thereby aiming to preserve local features, for example those of individual objects. However, existing approaches tend to couple objects by leaking information from neighboring contextual regions when the pairs have limited overlap. In this paper, we first quantitatively identify and confirm the existence of such a coupling phenomenon. We then address it by developing a remarkably simple yet highly effective solution comprising a novel augmentation method, Region Collaborative Cutout (RCC), and a corresponding decoupling branch. Importantly, our design is versatile and can be seamlessly integrated into existing SSL frameworks, whether based on Convolutional Neural Networks (CNNs) or Vision Transformers (ViTs). We conduct extensive experiments, incorporating our solution into two CNN-based and two ViT-based methods, with results confirming the effectiveness of our approach. Moreover, we provide empirical evidence that our method significantly contributes to the disentanglement of feature representations among objects, both quantitatively and qualitatively.
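For context, the sketch below shows a plain Cutout-style masking augmentation (zeroing a random patch), the general family of augmentations that a cutout-based method like RCC belongs to. It is only an illustration under that assumption; the region-collaborative logic and decoupling branch described in the abstract are not reproduced here, and the function name and parameters are hypothetical.

```python
# Minimal sketch of a standard Cutout-style augmentation, for illustration only;
# this is NOT the paper's Region Collaborative Cutout (RCC).
import numpy as np

def cutout(image: np.ndarray, mask_size: int, rng: np.random.Generator) -> np.ndarray:
    """Zero out a random square patch of an (H, W, C) image."""
    h, w = image.shape[:2]
    # Sample the patch centre uniformly over the image.
    cy = int(rng.integers(0, h))
    cx = int(rng.integers(0, w))
    y0, y1 = max(cy - mask_size // 2, 0), min(cy + mask_size // 2, h)
    x0, x1 = max(cx - mask_size // 2, 0), min(cx + mask_size // 2, w)
    out = image.copy()
    out[y0:y1, x0:x1, :] = 0  # mask the selected patch
    return out

# Usage: mask a 64x64 patch in a random 224x224 RGB image.
rng = np.random.default_rng(0)
img = rng.random((224, 224, 3), dtype=np.float32)
aug = cutout(img, mask_size=64, rng=rng)
```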
