Machine learning models that first learn a representation of a domain in terms of human-understandable concepts, then use that representation to make predictions, have been proposed to facilitate interpretation of, and interaction with, models trained on high-dimensional data. However, these methods have important limitations: the way they define concepts is not inherently interpretable, and they assume that concept labels either exist for individual instances or can easily be acquired from users. These limitations are particularly acute for high-dimensional tabular features. We propose an approach for learning a set of transparent concept definitions in high-dimensional tabular data that relies on users labeling concept features instead of individual instances. Our method produces concepts that both align with users' intuitive sense of what a concept means and facilitate prediction of the downstream label by a transparent machine learning model. This ensures that the full model is transparent and intuitive, and as predictive as possible given this constraint. We demonstrate with simulated user feedback on real prediction problems, including one in a clinical domain, that this kind of direct feedback is much more efficient at learning solutions that align with ground-truth concept definitions than alternative transparent approaches that rely on labeling instances or on other existing interaction mechanisms, while maintaining similar predictive performance.