

Poster

Label-Noise Robust Diffusion Models

Byeonghu Na · Yeongmin Kim · HeeSun Bae · Jung Hyun Lee · Se Jung Kwon · Wanmo Kang · Il-chul Moon

Halle B
Tue 7 May 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Conditional diffusion models have shown remarkable performance in various generative tasks, but training them requires large-scale datasets that often contain noise in the conditional inputs, a.k.a. noisy labels. This noise leads to condition mismatch and quality degradation of the generated data. This paper proposes Transition-aware weighted Denoising Score Matching (TDSM) for training conditional diffusion models with noisy labels, the first study of this problem in the context of diffusion models. The TDSM objective contains a weighted sum of score networks, where the weights represent instance-wise and time-dependent label transition probabilities. These weights are derived from the relationship between the conditional scores on noisy and clean labels. We also introduce a transition-aware weight estimator, which leverages a time-dependent noisy-label classifier customized to the diffusion process. We conduct experiments on various datasets and noisy-label settings and verify that models trained with the TDSM objective generate high-quality samples that closely match the given conditions. Furthermore, our models improve generation performance even on standard benchmark datasets, which suggests that noisy labels are present in prevalent benchmarks and pose a risk to generative model learning. Finally, we show that TDSM improves performance on top of conventional noisy-label correction methods, which empirically demonstrates its contribution as a component of label-noise robust generative models.
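Under the standard label-noise assumption that the noisy label is independent of the data given the clean label, the relationship the abstract refers to can be written as ∇ log p_t(x_t | ỹ) = Σ_y p(y | x_t, ỹ) ∇ log p_t(x_t | y): the score conditioned on a noisy label ỹ is a transition-probability-weighted sum of clean-label conditional scores. As a rough illustration only (not the authors' code), a minimal PyTorch sketch of such a weighted objective might look as follows; `score_net`, `weight_estimator`, `sigma`, and all tensor shapes are hypothetical placeholders.

```python
import torch

def tdsm_loss(score_net, weight_estimator, x0, y_noisy, num_classes, sigma):
    """Sketch of a transition-aware weighted denoising score matching loss.

    Assumed (hypothetical) interfaces:
      score_net(x_t, t, y)          -> conditional score estimate, (B, D)
      weight_estimator(x_t, t, y_n) -> transition probabilities,   (B, C),
          an estimate of p(clean label = y | x_t, t, noisy label y_n),
          which the paper obtains via a time-dependent noisy-label classifier
      sigma(t)                      -> noise scale at time t,      (B,)
      x0: clean data flattened to (B, D); y_noisy: noisy labels,  (B,)
    """
    B = x0.shape[0]
    t = torch.rand(B, device=x0.device)        # sample diffusion times
    noise = torch.randn_like(x0)
    std = sigma(t).unsqueeze(-1)               # (B, 1)
    x_t = x0 + std * noise                     # perturbed sample
    target = -noise / std                      # DSM target: grad log q(x_t | x0)

    with torch.no_grad():
        # Instance-wise, time-dependent label transition weights, rows sum to 1
        w = weight_estimator(x_t, t, y_noisy)  # (B, C)

    # Weighted sum of clean-label-conditional score network outputs
    weighted_score = torch.zeros_like(x_t)
    for y in range(num_classes):
        y_cond = torch.full((B,), y, dtype=torch.long, device=x0.device)
        weighted_score = weighted_score + w[:, y:y+1] * score_net(x_t, t, y_cond)

    # Regress the weighted score onto the denoising target from noisy-label data
    return ((weighted_score - target) ** 2).sum(dim=-1).mean()
```

Note that the per-class loop costs C score-network evaluations per batch; a practical implementation would presumably batch the class conditions or restrict the sum to labels with non-negligible transition weight.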
