In this paper, we focus on exploiting neural networks for the analysis and planning stages in self-adaptive architectures. The motivating cases studied in this paper involve existing (legacy) self-adaptive architectures whose adaptation logic is specified by logical rules. We further assume that there is a need to endow these systems with the ability to learn from examples of inputs and expected outputs. One simple option to address such a need is to replace the reasoning based on logical rules with a neural network. However, this step brings several problems that often cause at least a temporary regression. The reason is that the logical rules typically represent a large and well-tested body of domain knowledge, which may be lost if the rules are replaced by a neural network. Further, the black-box nature of generic neural networks obfuscates how the system works internally and consequently introduces more uncertainty. In this paper, we present a method that makes it possible to endow existing self-adaptive architectures with the ability to learn using neural networks, while preserving the domain knowledge encoded in the logical rules. We introduce a continuum between the existing rule-based system and a system based on a generic neural network. We show how to navigate this continuum to create a neural network architecture that naturally embeds the original logical rules, and how to gradually scale the learning potential of the network, thus controlling the uncertainty inherent to all soft computing models. We showcase and evaluate the approach on representative excerpts from two larger real-life use cases.