Federated Learning (FL) is an emerging domain in the broader context of artificial intelligence research. FL methodologies assume a distributed model-training setup consisting of a collection of clients and a server, with the main goal of learning an optimal global model under restrictions on data sharing due to privacy concerns. It is worth highlighting that the existing FL literature mostly assumes stationary data-generation processes; such an assumption is unrealistic in real-world conditions, where concept drift occurs due to, for instance, seasonal or periodic observations or faults in sensor measurements. In this paper, we introduce a multi-scale algorithmic framework that combines the theoretical guarantees of the \textit{FedAvg} and \textit{FedOMD} algorithms in near-stationary settings with a non-stationarity detection and adaptation technique, improving FL generalization performance in the presence of model/concept drifts. Our framework achieves a $\Tilde{\mathcal{O}} ( \min \{ \sqrt{LT} , \Delta^{\frac{1}{3}}T^{\frac{2}{3}} + \sqrt{T} \})$ \textit{dynamic regret} over $T$ rounds for a general convex loss function, where $L$ is the number of times non-stationary drifts occurred and $\Delta$ is the cumulative magnitude of drift experienced within the $T$ rounds.
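For reference, dynamic regret in online settings of this kind is typically measured against a sequence of per-round minimizers; a sketch of the standard formalization (the paper's exact comparator sequence and notation are our assumption) is
\[
\mathrm{D\text{-}Reg}(T) \;=\; \sum_{t=1}^{T} f_t(w_t) \;-\; \sum_{t=1}^{T} \min_{w \in \mathcal{W}} f_t(w),
\]
where $f_t$ denotes the (convex) global loss at round $t$, $w_t$ the global model held after round $t$, and $\mathcal{W}$ the feasible model space. Because the comparator $\min_{w \in \mathcal{W}} f_t(w)$ may change at every round, dynamic regret, unlike static regret, captures the cost of tracking a drifting optimum.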