

Poster

COSA: Concatenated Sample Pretrained Vision-Language Foundation Model

Sihan Chen · Xingjian He · Handong Li · Xiaojie Jin · Jiashi Feng · Jing Liu

Halle B
Tue 7 May 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Due to the limited scale and quality of video-text training corpora, most vision-language foundation models employ image-text datasets for pretraining and primarily focus on modeling visual semantic representations while disregarding temporal semantic representations and correlations. To address this issue, we propose COSA, a COncatenated SAmple pretrained vision-language foundation model. COSA jointly models visual contents and event-level temporal cues using only image-text corpora. We achieve this by sequentially concatenating multiple image-text pairs as inputs for pretraining. This transformation effectively converts existing image-text corpora into a pseudo video-paragraph corpus, enabling richer scene transformations and explicit event-description correspondence. Extensive experiments demonstrate that COSA consistently improves performance across a broad range of semantic vision-language downstream tasks, including paragraph-to-video retrieval, text-to-video/image retrieval, video/image captioning, and video QA. Notably, COSA achieves state-of-the-art results on various competitive benchmarks. Models and code will be released.
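To make the concatenated-sample idea concrete, below is a minimal sketch of how independent image-text pairs could be joined into a pseudo video-paragraph sample, as the abstract describes. The class and function names (ImageTextPair, PseudoVideoParagraph, concatenate_samples) are hypothetical and not from the paper or its released code; this illustrates the general data transformation only, not COSA's actual implementation.

```python
import random
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical stand-ins for an image-text corpus entry and the
# pseudo video-paragraph sample produced by concatenation.
@dataclass
class ImageTextPair:
    image_path: str
    caption: str

@dataclass
class PseudoVideoParagraph:
    frames: List[str]   # ordered image paths acting as pseudo video "frames"
    paragraph: str      # captions joined in the same order as the frames

def concatenate_samples(corpus: List[ImageTextPair],
                        num_pairs: int = 4,
                        seed: Optional[int] = None) -> PseudoVideoParagraph:
    """Draw `num_pairs` image-text pairs and concatenate them sequentially,
    so the sentence order of the pseudo paragraph matches the pseudo frame
    order (giving an explicit event-description correspondence)."""
    rng = random.Random(seed)
    picked = rng.sample(corpus, num_pairs)
    frames = [p.image_path for p in picked]
    paragraph = " ".join(p.caption for p in picked)
    return PseudoVideoParagraph(frames=frames, paragraph=paragraph)

if __name__ == "__main__":
    toy_corpus = [
        ImageTextPair("img_001.jpg", "A dog runs across a field."),
        ImageTextPair("img_002.jpg", "A chef plates a dish."),
        ImageTextPair("img_003.jpg", "Children play near a fountain."),
        ImageTextPair("img_004.jpg", "A cyclist climbs a steep hill."),
        ImageTextPair("img_005.jpg", "Waves crash against the rocks."),
    ]
    sample = concatenate_samples(toy_corpus, num_pairs=3, seed=0)
    print(sample.frames)
    print(sample.paragraph)
```

Each such concatenated sample mimics a short video with scene changes and a paired multi-sentence description, which is the property the abstract credits for the gains on paragraph-to-video retrieval and other temporally grounded tasks.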
