Deep learning is typically performed by learning a neural network solely from data in the form of input-output pairs, ignoring available domain knowledge. In this work, the Constraint Guided Gradient Descent (CGGD) framework is proposed, which enables the injection of domain knowledge into the training procedure. The domain knowledge is assumed to be described as a conjunction of hard inequality constraints, which appears to be a natural choice for several applications. Compared to other neuro-symbolic approaches, the proposed method converges to a model that satisfies any inequality constraint on the training data and does not require first transforming the constraints into some ad-hoc term that is added to the learning (optimisation) objective. Under certain conditions, it is shown that CGGD converges to a model that satisfies the constraints on the training set, while prior work does not necessarily converge to such a model. It is empirically shown on two small, independent data sets that CGGD makes training less dependent on the initialisation of the network and improves constraint satisfaction on all data.
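To make the general idea concrete, the following is a minimal, illustrative sketch in NumPy of guiding gradient descent with a hard inequality constraint instead of folding the constraint into the loss as a penalty term. The toy model f(x) = w*x, the constraint f(x) >= 0, and the specific correction direction are assumptions chosen for illustration only; they are not the exact CGGD update rule, which the abstract does not spell out.

```python
# Illustrative sketch only (NOT the authors' exact CGGD rule): a scalar model
# f(x) = w * x is fitted to data while a hard inequality constraint f(x) >= 0
# must hold on all training inputs. When the constraint is violated, the
# gradient step is augmented with a correction direction that pushes the model
# back toward the feasible region, rather than adding a penalty term to the loss.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.5, 1.5, size=32)           # training inputs (all positive)
y = 2.0 * x + rng.normal(0.0, 0.1, size=32)  # noisy targets

w = -1.0   # deliberately bad initialisation: f(x) = w * x < 0 everywhere
lr = 0.05

for step in range(200):
    pred = w * x
    grad_loss = np.mean(2.0 * (pred - y) * x)  # gradient of the MSE loss w.r.t. w

    # Hard constraint: pred >= 0 for every training input.
    violated = pred < 0.0
    if violated.any():
        # Direction that increases pred on the violated points (d pred / d w = x),
        # used to guide the step instead of being added to the loss itself.
        grad_constraint = -np.mean(x[violated])
        direction = grad_loss + grad_constraint
    else:
        direction = grad_loss

    w -= lr * direction

print(f"w = {w:.3f}, constraint satisfied on all training points: {(w * x >= 0).all()}")
```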