Deep Learning methods are renowned for their performance, yet their lack of interpretability prevents their adoption in high-stakes contexts. Recent model-agnostic methods address this problem by providing post-hoc interpretability through reverse-engineering the model's inner workings. However, in many regulated fields, interpretability should be kept in mind from the start, which means that post-hoc methods are valid only as a sanity check after model training. Interpretability from the start, in an abstract setting, means imposing a set of soft constraints on the model's behavior by injecting knowledge and mitigating possible biases. We propose a multicriteria technique that makes it possible to control the feature effects on the model's outcome by injecting knowledge into the objective function. We then extend the technique by including a non-linear knowledge function to account for more complex effects and local lack of knowledge. The result is a Deep Learning model that embodies interpretability from the start and aligns with recent regulations. A practical empirical example based on credit risk suggests that our approach yields performant yet robust models capable of overcoming biases arising from data scarcity.
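The abstract does not spell out the exact objective, so the following is only a minimal sketch of the general idea of a multicriteria loss: a standard predictive term plus a knowledge-based soft penalty on the effect of a chosen feature. The function name, the monotonicity-style penalty, the `feature_idx` argument, and the weight `lam` are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn as nn

def multicriteria_loss(model, x, y, feature_idx, lam=1.0):
    # Hypothetical multicriteria objective: predictive loss + knowledge penalty.
    x = x.clone().requires_grad_(True)
    logits = model(x).squeeze(-1)
    pred_loss = nn.functional.binary_cross_entropy_with_logits(logits, y)

    # Gradient of the output w.r.t. the inputs measures local feature effects.
    grads = torch.autograd.grad(logits.sum(), x, create_graph=True)[0]

    # Knowledge injection (assumed form): softly penalize negative effects of
    # the chosen feature, i.e. encourage a non-decreasing relationship
    # between that feature and the model's outcome.
    effect = grads[:, feature_idx]
    knowledge_penalty = torch.relu(-effect).mean()

    return pred_loss + lam * knowledge_penalty

# Usage sketch on synthetic data.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
x = torch.randn(64, 10)
y = torch.randint(0, 2, (64,)).float()
loss = multicriteria_loss(model, x, y, feature_idx=0, lam=0.5)
loss.backward()
```

The penalty acts as a soft constraint: predictive accuracy and domain knowledge are traded off through `lam`, rather than the knowledge being imposed as a hard architectural restriction.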