

Poster

Multi-Scale Representations by Varying Window Attention for Semantic Segmentation

Haotian Yan · Ming Wu · Chuang Zhang

Halle B
Wed 8 May 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract: Learning multi-scale representations is central to semantic segmentation. We visualize the effective receptive field (ERF) of canonical multi-scale representations and identify two risks in learning them: \textit{scale inadequacy} and \textit{field inactivation}. To address these issues, we present a novel multi-scale learner, \textbf{varying window attention} (VWA). VWA builds on local window attention (LWA) and disentangles it into a query window and a context window, allowing the scale of the context to vary so that the query learns representations at specific scales. However, enlarging the context window by a ratio $R$ significantly increases the memory footprint and computation cost ($R^2$ times larger than LWA). We propose a simple but effective re-scaling strategy that eliminates this extra cost without compromising performance. As a result, VWA shows clear superiority over previous multi-scale learners. Furthermore, building on VWA and various MLPs, we introduce a multi-scale decoder (MSD), \textbf{VWFormer}, to improve multi-scale representation learning in semantic segmentation. VWFormer matches the efficiency of the most compute-friendly MSDs, such as the FPN and MLP decoders, while performing much better than any existing MSD. For instance, at little extra overhead ($\sim 10$G FLOPs), VWFormer improves Mask2Former by $1.0\%-1.3\%$ mIoU. Using only half the computation, VWFormer outperforms the popular UperNet by $1.0\%-2.1\%$ mIoU.
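To make the mechanism concrete, below is a minimal PyTorch sketch of the idea described in the abstract, not the authors' implementation: queries come from a $w \times w$ local window, keys and values come from a context window enlarged by the ratio $R$, and the enlarged context is re-scaled back to $w \times w$ before attention, so the attention cost matches plain LWA. Average pooling is an assumed stand-in for the paper's re-scaling strategy, and the function name and parameters are illustrative.

import torch
import torch.nn.functional as F

def varying_window_attention(x, window=8, R=2, heads=4):
    # Hypothetical sketch of VWA. Queries come from w x w local windows;
    # keys/values come from an R-times-larger context window centered on
    # each query window, re-scaled (average-pooled here, as an assumed
    # stand-in for the paper's re-scaling strategy) back to w x w, so
    # attention costs the same as plain local window attention.
    # Assumes H, W divisible by `window` and (R - 1) * window even.
    B, C, H, W = x.shape
    hd = C // heads                            # per-head channel dim
    # Extract query windows: (B*nW, w*w, C).
    q = F.unfold(x, window, stride=window)     # (B, C*w*w, nW)
    q = q.transpose(1, 2).reshape(-1, C, window, window)
    q = q.flatten(2).transpose(1, 2)
    # Extract R*w x R*w context windows around each query window
    # (zero-padded at the borders), then pool them back down to w x w.
    pad = (R - 1) * window // 2
    ctx = F.unfold(F.pad(x, [pad] * 4), R * window, stride=window)
    ctx = ctx.transpose(1, 2).reshape(-1, C, R * window, R * window)
    ctx = F.adaptive_avg_pool2d(ctx, window)   # the "re-scaling" step
    kv = ctx.flatten(2).transpose(1, 2)        # (B*nW, w*w, C)
    # Standard multi-head attention between query window and context.
    split = lambda t: t.reshape(t.shape[0], -1, heads, hd).transpose(1, 2)
    qh, kh, vh = split(q), split(kv), split(kv)
    attn = (qh @ kh.transpose(-2, -1)) * hd ** -0.5
    out = (attn.softmax(-1) @ vh).transpose(1, 2).reshape(q.shape[0], -1, C)
    # Fold the per-window outputs back into a (B, C, H, W) feature map.
    out = out.transpose(1, 2).reshape(B, -1, C * window * window).transpose(1, 2)
    return F.fold(out, (H, W), window, stride=window)

With $R=1$ the sketch reduces to standard local window attention; for example, varying_window_attention(torch.randn(2, 64, 32, 32), window=8, R=2, heads=4) returns a tensor of the same shape as its input.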
