Poster in Workshop: Multimodal Representation Learning (MRL): Perks and Pitfalls
Analyzing Multimodal Objectives Through the Lens of Generative Diffusion Guidance
Chaerin Kong · Nojun Kwak
Keywords: [ diffusion ] [ generative models ] [ vision-language ]
Recent years have witnessed astonishing advances in the field of multimodal representation learning, with contrastive learning serving as the cornerstone of major breakthroughs. Recent works have delivered further improvements by incorporating additional objectives such as masked modeling and captioning into these frameworks, but our understanding of how these objectives facilitate learning remains largely incomplete. In this paper, we leverage the fact that classifier-guided diffusion models generate images reflecting the semantic signals provided by the classifier to study the characteristics of multimodal learning objectives. Specifically, we compare contrastive, matching, and captioning losses in terms of the semantic signals they provide, and introduce a simple baseline that not only supports our analyses but also improves the quality of generative guidance in a straightforward manner.
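As background for the guidance mechanism the abstract relies on, classifier guidance adjusts the diffusion model's noise prediction by the gradient of a classifier's log-probability, steering samples toward a target class. The sketch below is a minimal, hedged illustration of that update rule only; the classifier and all tensors are toy stand-ins, not the models or objectives used in the paper.

```python
import numpy as np

def guided_noise(eps_pred, x_t, grad_logprob, y, scale=1.0, sigma=1.0):
    """Classifier-guided noise estimate (illustrative sketch).

    eps_pred:     noise predicted by the diffusion model at x_t
    grad_logprob: callable returning grad_x log p(y | x_t) from a classifier
    scale:        guidance strength; larger values follow the classifier more
    Implements eps_hat = eps - scale * sigma * grad_x log p(y | x_t).
    """
    return eps_pred - scale * sigma * grad_logprob(x_t, y)

# Toy stand-in classifier (hypothetical, for shape/sign illustration only):
# log p(y | x) = -||x - y*1||^2, whose gradient is -2 * (x - y).
def toy_grad_logprob(x, y):
    return -2.0 * (x - y)

x_t = np.zeros((1, 3, 4, 4))       # a "noisy image" batch
eps = np.zeros_like(x_t)           # the model's noise prediction
eps_hat = guided_noise(eps, x_t, toy_grad_logprob, y=1.0, scale=0.5)
# Subtracting the gradient here means the denoising step moves x_t
# toward regions the classifier assigns to class y.
```

Because the guided sample reflects whatever semantic signal the classifier's gradient carries, swapping in gradients from different multimodal objectives (contrastive, matching, captioning) makes the generated images a probe of what each objective has learned, which is the comparison the paper performs.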