We consider the measurement model $Y = AX$, where $X$ and, hence, $Y$ are random variables and $A$ is an a priori known tall matrix. At each time instant, a sample of one of $Y$'s coordinates is available, and the goal is to estimate $\mu := \mathbb{E}[X]$ from these samples. The challenge, however, is that a small but unknown subset of $Y$'s coordinates is controlled by adversaries with infinite power: they can return any real number each time they are queried for a sample. For this adversarial setting, we propose the first asynchronous online algorithm that converges to $\mu$ almost surely. We prove this result using a novel differential-inclusion-based two-timescale analysis. Two key highlights of our proof are: (a) a novel Lyapunov function used to show that $\mu$ is the unique global attractor of our algorithm's limiting dynamics, and (b) the use of martingale and stopping-time theory to show that our algorithm's iterates are almost surely bounded.
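To make the measurement model concrete, the following minimal Python sketch simulates the setting: a tall matrix $A$, asynchronous one-coordinate-at-a-time samples of $Y = AX$, and a small set of adversarially controlled coordinates. The dimensions, the adversarial set, and the Gaussian noise model are illustrative assumptions, and the per-coordinate-averaging-plus-least-squares baseline is a naive estimator, not the algorithm proposed here; it merely shows how even a few corrupted coordinates can bias an estimate of $\mu$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: A is "tall" (more coordinates in Y than in X).
n, d = 8, 3
A = rng.standard_normal((n, d))
mu = np.array([1.0, -2.0, 0.5])   # true mean of X (unknown to the estimator)
adversarial = {2, 5}              # hypothetical small, unknown set of corrupted coordinates

def sample_coordinate(i):
    """Return one sample of Y_i = (A X)_i; adversarial coordinates may return any real."""
    if i in adversarial:
        return rng.uniform(-1e3, 1e3)      # adversary returns an arbitrary real number
    x = mu + rng.standard_normal(d)        # one honest sample of X
    return float(A[i] @ x)

# Naive baseline (NOT the proposed algorithm): average each coordinate's samples,
# then solve the least-squares problem A @ mu_hat ~ y_bar.
T = 20_000
sums, counts = np.zeros(n), np.zeros(n)
for _ in range(T):
    i = int(rng.integers(n))               # asynchronous: one coordinate per time step
    sums[i] += sample_coordinate(i)
    counts[i] += 1
y_bar = sums / np.maximum(counts, 1.0)
mu_hat, *_ = np.linalg.lstsq(A, y_bar, rcond=None)

print("naive estimate:", mu_hat)           # typically far from mu due to the corrupted rows
print("true mean:     ", mu)
```

The naive estimate is pulled away from $\mu$ by the corrupted coordinates; the proposed asynchronous algorithm, by contrast, is shown to converge to $\mu$ almost surely despite such corruptions.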