What would the inputs be to a machine whose output is the destabilization of a robust democracy, or whose emanations could disrupt the political power of nations? In the recent essay "The Coming AI Hackers," Schneier (2021) proposed a future application of artificial intelligence to discover, manipulate, and exploit vulnerabilities of social, economic, and political systems at speeds far exceeding humans' ability to recognize and respond to such threats. This work advances that concept by applying machine learning theory to it, hypothesizing possible "featurization" (input specification and transformation) frameworks for AI hacking. Focusing on the political domain, we develop graph and sequence data representations that would enable the application of a range of deep learning models to predict attributes and outcomes of political, particularly legislative, systems. We explore possible data models, datasets, predictive tasks, and actionable applications associated with each framework. We speculate about the likely practical impact and feasibility of such models, and conclude by discussing their ethical implications.
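As one minimal sketch of the graph featurization the abstract describes, the code below builds a toy legislative co-sponsorship network and derives a simple per-node feature (weighted degree) that could serve as one input column for a graph-based model. All bill identifiers, legislator names, and records here are invented for illustration; they are not drawn from the paper's datasets.

```python
# Hypothetical featurization sketch: legislators are nodes, and an
# edge weight counts bills a pair has co-sponsored. Data is invented.
from itertools import combinations

# Toy co-sponsorship records: bill -> sponsoring legislators.
bills = {
    "HR-101": ["A", "B", "C"],
    "HR-102": ["B", "C"],
    "HR-103": ["A", "D"],
}

legislators = sorted({m for members in bills.values() for m in members})
index = {name: i for i, name in enumerate(legislators)}

# Weighted adjacency matrix: entry (i, j) counts shared sponsorships.
n = len(legislators)
adj = [[0] * n for _ in range(n)]
for members in bills.values():
    for a, b in combinations(members, 2):
        i, j = index[a], index[b]
        adj[i][j] += 1
        adj[j][i] += 1

# A simple node feature: weighted degree (total co-sponsorships),
# one candidate input for a graph neural network over this network.
degree = [sum(row) for row in adj]
```

A sequence featurization of the same domain might instead encode each bill's procedural history (introduction, committee referral, votes) as an ordered token stream for a recurrent or transformer model.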