The following briefly discusses possible difficulties in communicating with and controlling an AGI (artificial general intelligence), building upon an explanation of the Fermi Paradox and preceding work on symbol emergence and artificial general intelligence. The latter suggests that, to infer what someone means, an agent constructs a rationale for the observed behaviour of others. Communication then requires that the two agents labour under similar compulsions and have similar experiences (construct similar solutions to similar tasks). Any non-human intelligence may construct solutions such that any rationale for its behaviour (and thus the meaning of its signals) lies outside the scope of what a human is inclined to notice or comprehend. Further, the more compressed a signal, the closer it appears to random noise. Another intelligence may be able to compress information to such an extent that, to us, its signals would be indistinguishable from noise (an explanation for the Fermi Paradox). To improve predictive accuracy, an AGI would tend toward more compressed representations of the world, making any rationale for its behaviour more difficult to comprehend for the same reason. Communication with and control of an AGI may consequently necessitate not only human-like compulsions and experiences, but imposed cognitive impairment.
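The claim that more heavily compressed signals look increasingly like random noise follows from information theory: an optimally compressed message retains little statistical regularity for an observer to latch onto. A minimal illustrative sketch is given below (the use of Python, zlib, and the particular sample text are assumptions for demonstration, not part of the original work); it measures the empirical byte-level Shannon entropy of a redundant text before and after lossless compression, showing the compressed form approaching the 8-bits-per-byte ceiling of uniform noise.

```python
import math
import zlib
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Empirical Shannon entropy of a byte string, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A highly structured "signal": repetitive English-like text.
raw = " ".join(f"observation {i} of a structured signal" for i in range(20000)).encode()

# The same information after lossless compression. Its byte distribution is
# far closer to uniform (8 bits/byte), i.e. closer to random noise.
compressed = zlib.compress(raw, 9)

print(f"raw:        {entropy_bits_per_byte(raw):.2f} bits/byte ({len(raw)} bytes)")
print(f"compressed: {entropy_bits_per_byte(compressed):.2f} bits/byte ({len(compressed)} bytes)")
```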