Experts in Artificial Intelligence (AI) development predict that advances in the development of intelligent systems and agents will reshape vital areas of our society. Nevertheless, if such advances are not made with prudence, they may result in negative outcomes for humanity. For this reason, several researchers in the field are trying to develop a robust, beneficial, and safe concept of artificial intelligence. Currently, many of the open problems in AI research arise from the difficulty of avoiding unwanted behaviors in intelligent agents while specifying what we actually want such systems to do. It is of utmost importance that artificially intelligent agents have their values aligned with human values, given that, as the Orthogonality Thesis argues, we cannot expect an AI to develop our moral preferences simply by virtue of its intelligence. Perhaps this difficulty stems from the way we address the problem of expressing objectives, values, and ends, namely through representational cognitive methods. A possible solution is the dynamic cognitive approach proposed by Dreyfus, whose phenomenological philosophy argues that the human experience of being-in-the-world cannot be represented by symbolic or connectionist cognitive methods. One possible approach along these lines would be to use theoretical models such as SED (situated embodied dynamics) to address the value learning problem in AI.