

Poster

FairVLM: Mitigating Bias In Pre-Trained Vision-Language Models

Sepehr Dehdashtian · Lan Wang · Vishnu Boddeti

Halle B
Thu 9 May 1:45 a.m. PDT — 3:45 a.m. PDT

Abstract:

Large pre-trained vision-language models (VLMs) provide compact and general-purpose representations of text and images that are demonstrably effective across multiple downstream vision and language tasks. However, owing to the nature of their training process, these models can 1) propagate or amplify societal biases present in the training data and 2) learn to rely on spurious features. This paper proposes FairVLM, a general approach for making the zero-shot predictions of VLMs fairer and more robust to spurious correlations. We formulate the problem of jointly debiasing VLMs' image and text representations in reproducing kernel Hilbert spaces (RKHSs), which affords multiple benefits: 1) Flexibility: unlike existing approaches, which are specialized to learn either with or without ground-truth labels, FairVLM is adaptable to both scenarios; 2) Ease of optimization: FairVLM lends itself to an iterative optimization involving closed-form solvers, which leads to 4×-10× faster training than existing methods; 3) Sample efficiency: under sample-limited conditions, FairVLM significantly outperforms baselines, even in regimes where they fail entirely; and 4) Performance: empirically, FairVLM achieves appreciable zero-shot accuracy gains over the respective baselines on benchmark fairness and spurious-correlation datasets.
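The abstract does not spell out FairVLM's objective, but the general flavor of a closed-form RKHS debiaser can be sketched with kernel ridge regression: residualize frozen VLM embeddings against a known spurious or sensitive attribute, removing the component of the embeddings that the attribute predicts. The sketch below is an illustrative assumption, not the paper's actual algorithm; the names `rbf_kernel` and `kernel_debias` and the residualization recipe are hypothetical.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF (Gaussian) kernel between rows of X (n x p) and Y (m x p)."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * d2)

def kernel_debias(Z, S, lam=1e-3, gamma=1.0):
    """Remove the component of embeddings Z (n x d) predictable from a
    sensitive/spurious attribute S (n x 1) via kernel ridge regression.

    The KRR fit has a closed-form solution, so no iterative gradient
    training is needed; the returned residuals have reduced statistical
    dependence on S.
    """
    K = rbf_kernel(S, S, gamma)                          # n x n Gram matrix on S
    n = K.shape[0]
    # Closed-form ridge coefficients: (K + lam*n*I)^{-1} Z
    alpha = np.linalg.solve(K + lam * n * np.eye(n), Z)
    return Z - K @ alpha                                 # residual embeddings

# Example: debias 512-dim image embeddings against a binary attribute.
rng = np.random.default_rng(0)
Z_img = rng.normal(size=(200, 512))       # stand-in for frozen VLM embeddings
S = rng.integers(0, 2, size=(200, 1)).astype(float)
Z_debiased = kernel_debias(Z_img, S)
```

In this toy recipe, applying the same residualization to both image and text embeddings would loosely mirror the joint debiasing the abstract describes; the actual method alternates closed-form solvers over an RKHS objective rather than a single regression.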
