In a previous paper, we proposed a set of concepts, axiom schemata and algorithms that agents can use to learn to describe their behaviour, goals, capabilities, and environment. The current paper proposes a new set of concepts, axiom schemata and algorithms that allow an agent to learn new descriptions of an observed behaviour (e.g., perplexing actions), of its actor (e.g., undesired propositions or actions), and of its environment (e.g., incompatible propositions). Each learned description (e.g., a certain action prevents another action from being performed in the future) is represented by a relationship between entities (either propositions or actions) and is learned by the agent, purely by observation, using domain-independent axiom schemata and/or learning algorithms. The relations used by agents to represent the descriptions they learn were inspired by Rhetorical Structure Theory (RST). The main contribution of the paper is the relation family Although, inspired by the RST relation Concession. The precise definition of the relations in the Although family relies on a set of deontic concepts, whose definitions and corresponding algorithms are presented. The relations of the Although family, once extracted from the agent's observations, express surprise at the observed behaviour and, in certain circumstances, provide a justification for it. The paper presents results of applying these proposals in a demonstration scenario, using implemented software.
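As a purely illustrative aid, the sketch below shows one possible way to encode a learned description as a relationship between entities (propositions or actions), following the abstract's characterisation. All names here (Proposition, Action, Relation, the "although" label) are hypothetical and are not taken from the paper; the nucleus/satellite terminology is borrowed from RST.

```python
from dataclasses import dataclass
from typing import Union

# Hypothetical minimal encoding of the entities mentioned in the abstract:
# propositions and actions, plus a named binary relation between them.

@dataclass(frozen=True)
class Proposition:
    name: str
    holds: bool = True

@dataclass(frozen=True)
class Action:
    name: str
    actor: str

Entity = Union[Proposition, Action]

@dataclass(frozen=True)
class Relation:
    """A learned description: a named relationship between two entities."""
    kind: str          # e.g. "prevents", "although" (illustrative labels)
    nucleus: Entity    # the entity the description is about (RST nucleus)
    satellite: Entity  # the contrasting or justifying entity (RST satellite)

# Example: an observed action that is surprising because it conflicts with
# an expected proposition, in the spirit of the Although family.
expected = Proposition("door_stays_locked")
observed = Action("unlock_door", actor="robot_1")
although = Relation(kind="although", nucleus=observed, satellite=expected)

print(f"{although.nucleus.name} was observed although "
      f"{although.satellite.name} was expected")
```

This is only a sketch of the representational idea; the paper's actual relations are defined through deontic concepts and domain-independent axiom schemata rather than a fixed data structure.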