The Tsetlin Machine (TM) has been gaining popularity as an inherently interpretable machine learning method that achieves promising performance with low computational complexity across a variety of applications. The interpretability and low computational complexity of the TM stem from its use of Boolean expressions to represent the various sub-patterns in the data. Despite these favorable properties, the TM has not become the go-to method for AI applications, mainly because of its conceptual and theoretical differences from perceptrons and neural networks, which are more widely known and better understood. In this paper, we provide detailed insights into the operational concept of the TM and attempt to bridge the gap in theoretical understanding between the perceptron and the TM. More specifically, we study the operational concept of the TM following the analytical structure of perceptrons, highlighting the resemblance between the two. Through this analysis, we show that the TM's weight update can be considered a special case of the gradient-based weight update. We also perform an empirical analysis of the TM, demonstrating its flexibility in determining clause length, visualizing its decision boundaries, and extracting interpretable Boolean expressions. In addition, we discuss the advantages of the TM in terms of its structure and its ability to solve more complex problems.
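For readers unfamiliar with the clause structure the abstract refers to, the following minimal sketch (not from the paper; the clause, its literal-inclusion encoding, and the example input are illustrative assumptions) shows how a single TM clause evaluates a conjunction of Boolean literals over a binary input vector.

```python
# Minimal illustrative sketch (assumption, not the paper's implementation):
# a TM clause is a conjunction (AND) of included literals, where a literal is
# either an input bit or its negation.

def clause_output(x, included_literals):
    """Evaluate one TM clause on a Boolean input vector x.

    included_literals: set of literal indices; index k < len(x) refers to x[k],
    while index k >= len(x) refers to NOT x[k - len(x)].
    """
    n = len(x)
    for k in included_literals:
        literal = x[k] if k < n else 1 - x[k - n]
        if literal == 0:      # one false literal falsifies the conjunction
            return 0
    return 1                  # all included literals are true


# Example: the clause "x0 AND NOT x1" on input x = [1, 0] evaluates to 1.
print(clause_output([1, 0], {0, 3}))  # -> 1
```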