We propose $\mathcal{T}$ruth $\mathcal{T}$able net ($\mathcal{TT}$net), a novel Convolutional Neural Network (CNN) architecture that addresses, by design, the open challenges of interpretability, formal verification, and logic gate conversion. $\mathcal{TT}$net is built from CNN filters that are equivalent to tractable truth tables, which we call Learning Truth Table (LTT) blocks. The dual form of LTT blocks allows the truth tables to be trained easily with gradient descent and makes these CNNs easy to interpret, verify, and run inference on. Specifically, $\mathcal{TT}$net is a deep CNN model that can be automatically represented, after a post-training transformation, as a sum of Boolean decision trees, as a sum of Disjunctive/Conjunctive Normal Form (DNF/CNF) formulas, or as a compact Boolean logic circuit. We demonstrate the effectiveness and scalability of $\mathcal{TT}$net on multiple datasets, showing interpretability comparable to decision trees, fast complete/sound formal verification, and scalable logic gate representation, in each case against state-of-the-art methods. We believe this work represents a step towards making CNNs more transparent and trustworthy for real-world critical applications.
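The core idea claimed above, that a filter whose receptive field covers only a few binarized inputs is equivalent to a tractable truth table, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weights, bias, and thresholding rule below are placeholder assumptions, and a real LTT block would be a trained CNN filter.

```python
import itertools

def filter_truth_table(weights, bias, threshold=0.0):
    """Enumerate every binary input pattern of a filter's receptive field
    and record the thresholded (binarized) output: the filter's truth table.
    A kernel covering k binary inputs admits only 2**k patterns, so the
    table is tractable for small k."""
    k = len(weights)
    table = {}
    for bits in itertools.product([0, 1], repeat=k):
        pre_activation = sum(w * b for w, b in zip(weights, bits)) + bias
        table[bits] = int(pre_activation > threshold)
    return table

# A 3-tap filter with illustrative weights -> an 8-row truth table.
table = filter_truth_table(weights=[0.8, -0.5, 1.2], bias=-0.6)

# The rows mapping to 1 directly give a DNF formula for the filter.
dnf_terms = [bits for bits, out in table.items() if out == 1]
```

Once tabulated, each filter is an ordinary Boolean function, which is what makes the downstream conversions the abstract mentions (decision trees, DNF/CNF formulas, logic circuits) mechanical.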