
Poster

FedImpro: Measuring and Improving Client Update in Federated Learning

Zhenheng Tang · Yonggang Zhang · Shaohuai Shi · Xinmei Tian · Tongliang Liu · Bo Han · Xiaowen Chu

Halle B
Fri 10 May 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Federated Learning (FL) models typically suffer from client drift caused by heterogeneous data, where data distributions vary across clients. To mitigate this, prior work mainly focuses on manipulating the existing gradients to obtain more similar client models. In this paper, we take a different view of client drift and correct it by producing better local models. First, we analyze the generalization contribution of local training and conclude that it is bounded by the conditional Wasserstein distance between clients' data distributions. Then, we propose FedImpro, which constructs similar conditional distributions for local training. Specifically, FedImpro decouples the model into high-level and low-level parts and trains the high-level part on reconstructed feature distributions, promoting the generalization contribution and alleviating the gradient dissimilarity of FL. Experimental results demonstrate that FedImpro helps FL defend against data heterogeneity and improves model generalization.
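The abstract describes splitting a client model at an intermediate layer and training the high-level part on features sampled from a shared conditional distribution. The sketch below illustrates that idea only; the split point, the per-class Gaussian feature model, and all names and hyperparameters (`low_level`, `high_level`, `feat_mean`, `feat_std`, `local_step`) are illustrative assumptions, not the paper's exact algorithm.

```python
# Illustrative sketch of one client's local step in a FedImpro-style
# setup: the low-level part trains only on local data, while the
# high-level part additionally trains on features drawn from a shared
# (assumed Gaussian, per-class) estimate of the feature distribution.
import torch
import torch.nn as nn

NUM_CLASSES, FEAT_DIM = 10, 64

# Decoupled model: low-level feature extractor + high-level classifier.
low_level = nn.Sequential(nn.Flatten(), nn.Linear(784, FEAT_DIM), nn.ReLU())
high_level = nn.Linear(FEAT_DIM, NUM_CLASSES)

# Hypothetical shared per-class feature statistics, e.g. aggregated by
# the server across clients (assumed here for illustration).
feat_mean = torch.zeros(NUM_CLASSES, FEAT_DIM)
feat_std = torch.ones(NUM_CLASSES, FEAT_DIM)

opt = torch.optim.SGD(
    list(low_level.parameters()) + list(high_level.parameters()), lr=0.01
)
loss_fn = nn.CrossEntropyLoss()

def local_step(x, y):
    """One local training step on a client's batch (x, y)."""
    real_feat = low_level(x)  # features from local (heterogeneous) data
    # Sample synthetic labels and features from the shared conditional
    # feature distribution, so the high-level part sees a more similar
    # distribution across clients.
    fake_y = torch.randint(0, NUM_CLASSES, (x.size(0),))
    fake_feat = feat_mean[fake_y] + feat_std[fake_y] * torch.randn(
        x.size(0), FEAT_DIM
    )
    loss = loss_fn(high_level(real_feat), y) + loss_fn(high_level(fake_feat), fake_y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example usage with random data standing in for one client's batch.
x, y = torch.randn(32, 784), torch.randint(0, NUM_CLASSES, (32,))
print(local_step(x, y))
```

Because the synthetic features are sampled from a distribution shared across clients, the gradients of the high-level part become more similar between clients, which is the mechanism the abstract attributes to the alleviated gradient dissimilarity.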
