With the rapid development and integration of artificial intelligence (AI) methods in next-generation networks (NextG), AI algorithms provide significant advantages for NextG in terms of frequency spectrum usage, bandwidth, latency, and security. A key feature of NextG is the integration of AI, i.e., a self-learning architecture based on self-supervised algorithms, to improve network performance. A secure AI-powered structure is also expected to protect NextG networks against cyber-attacks. However, AI itself may be attacked, e.g., through model poisoning by adversaries, resulting in cybersecurity violations. This paper proposes an AI trust platform built with Streamlit for NextG networks that allows researchers to evaluate, defend, certify, and verify their AI models and applications against the adversarial threats of evasion, poisoning, extraction, and inference.
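To make the evasion threat mentioned above concrete, the sketch below shows an FGSM-style perturbation flipping the prediction of a toy linear classifier. This is a hypothetical, self-contained illustration of the attack class, assuming a simple linear model; it is not the platform's actual code, which the abstract describes only at a high level.

```python
# Minimal sketch of an evasion attack (FGSM-style sign perturbation)
# on a toy linear classifier. Hypothetical example for illustration only.

def predict(w, b, x):
    """Linear score; predicted class is 1 if score > 0, else 0."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm_perturb(w, x, y_true, eps):
    """Shift x against the true class by eps along the gradient sign.

    For a linear score, d(score)/dx_i = w_i, so stepping each feature
    by eps * sign(w_i) in the direction that moves the score away from
    the true label flips the prediction once eps is large enough.
    """
    direction = 1.0 if y_true == 0 else -1.0  # raise score if true class is 0
    return [xi + direction * eps * (1.0 if wi > 0 else -1.0)
            for wi, xi in zip(w, x)]

w, b = [0.5, -0.25], 0.1
x = [1.0, 2.0]                                # clean input, score = 0.1
y_clean = 1 if predict(w, b, x) > 0 else 0    # -> class 1
x_adv = fgsm_perturb(w, x, y_true=y_clean, eps=0.5)
y_adv = 1 if predict(w, b, x_adv) > 0 else 0  # -> class 0 (flipped)
print(y_clean, y_adv)
```

Poisoning, extraction, and inference attacks target the training data, the model parameters, and the training-set membership respectively; an evaluation platform of the kind the abstract proposes would exercise a model against each class in turn.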