Multi-Agent Reinforcement Learning (MARL) is vulnerable to Adversarial Machine Learning (AML) attacks and needs adequate defences before it can be used in real-world applications. We have conducted a survey of execution-time AML attacks against MARL and of the defences against those attacks. We surveyed related work on the application of AML to Deep Reinforcement Learning (DRL) and Multi-Agent Learning (MAL) to inform our analysis of AML for MARL. We propose a novel perspective for understanding how an AML attack is perpetrated, by defining Attack Vectors. We develop two new frameworks to address a gap in current modelling frameworks, focusing on the means and tempo of an AML attack against MARL, and we identify knowledge gaps and future avenues of research.