

Poster

Accurate Forgetting for Heterogeneous Federated Continual Learning

Abudukelimu Wuerkaixi · Sen Cui · Jingfeng Zhang · Kunda Yan · Bo Han · Gang Niu · Lei Fang · Changshui Zhang · Masashi Sugiyama

Halle B
Wed 8 May 1:45 a.m. PDT — 3:45 a.m. PDT

Abstract:

Recent years have witnessed a burgeoning interest in federated learning (FL). However, the contexts in which clients engage in sequential learning remain under-explored. Bridging FL and continual learning (CL) gives rise to a challenging practical problem: federated continual learning (FCL). Existing research in FCL primarily focuses on mitigating the catastrophic forgetting issue of continual learning while collaborating with other clients. We argue that forgetting phenomena are not invariably detrimental. In this paper, we consider a more practical and challenging FCL setting characterized by potentially unrelated or even antagonistic data/tasks across different clients. In the FL scenario, statistical heterogeneity and data noise among clients may exhibit spurious correlations that result in biased feature learning. While existing CL strategies focus on complete utilization of previous knowledge, we find that forgetting biased information is beneficial. Therefore, we propose a new concept, accurate forgetting (AF), and develop a novel generative-replay method, AF-FCL, that selectively utilizes previous knowledge in federated networks. We employ a probabilistic framework based on a normalizing flow model to quantify the credibility of previous knowledge. Comprehensive experiments affirm the superiority of our method over various baselines. Code is at: https://anonymous.4open.science/r/AF-FCL-7D65.
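The abstract's core mechanism is using a normalizing flow's density estimate as a credibility score for replayed knowledge, so low-credibility (likely biased) samples are softly forgotten rather than fully reused. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: the affine-coupling flow, the `SimpleFlow` and `credibility_weights` names, and the softmax weighting scheme are all hypothetical stand-ins chosen for illustration.

```python
# Minimal sketch (not the authors' code): score replayed features with a
# normalizing flow and down-weight low-credibility samples during replay.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One affine coupling layer: half the dims predict a scale/shift for the rest."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)                      # bound scales for numerical stability
        z2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=1)                 # log|det Jacobian| of the transform
        return torch.cat([x1, z2], dim=1), log_det

class SimpleFlow(nn.Module):
    """Stack of coupling layers with dimension flips; base density is N(0, I)."""
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(AffineCoupling(dim) for _ in range(n_layers))

    def log_prob(self, x):
        log_det = torch.zeros(x.size(0), device=x.device)
        for layer in self.layers:
            x, ld = layer(x)
            log_det = log_det + ld
            x = x.flip(dims=[1])               # flip so both halves get transformed
        base = -0.5 * (x.pow(2).sum(dim=1)
                       + x.size(1) * torch.log(torch.tensor(2 * torch.pi)))
        return base + log_det

def credibility_weights(flow, replayed_features):
    """Map flow log-likelihoods to per-sample weights with mean ~ 1."""
    with torch.no_grad():
        ll = flow.log_prob(replayed_features)
        return torch.softmax(ll, dim=0) * ll.size(0)

# Usage: weight the replay loss so low-likelihood (biased) samples are
# softly forgotten instead of being replayed at full strength.
flow = SimpleFlow(dim=32)
feats = torch.randn(8, 32)                     # stand-in for generator output
w = credibility_weights(flow, feats)
# replay_loss = (w * per_sample_loss(feats, labels)).mean()
```

Fitting the flow on features the clients currently trust would make in-distribution replayed samples receive higher likelihood, and hence higher weight, than artifacts of earlier spurious correlations; how the paper actually trains and applies the flow is specified in the full text, not here.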
