Dimensionality reduction (DR) is a popular method for preparing and analyzing high-dimensional data. Reduced data representations are less computationally intensive and easier to manage and visualize, while retaining a significant portion of the original information. Despite these advantages, reduced representations are often difficult or impossible to interpret, especially when the DR technique provides no information about which features of the original space drove their construction. This problem is addressed by Interpretable Machine Learning, a subfield of Explainable Artificial Intelligence that tackles the opacity of machine learning models. However, current research on Interpretable Machine Learning has focused on supervised tasks, leaving unsupervised tasks like dimensionality reduction largely unexplored. In this paper, we introduce LXDR, a technique capable of providing local interpretations of the output of DR techniques. Experimental results and two LXDR use cases are presented to evaluate its usefulness.
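The abstract does not detail LXDR's algorithm, so the following is only a minimal sketch of the general local-surrogate idea it alludes to: fit an interpretable linear model on a neighborhood of an instance to approximate the DR output, then read the coefficients as local feature importances. The use of scikit-learn, PCA as the DR technique, the `local_interpretation` helper, and the neighborhood size are illustrative assumptions, not the authors' exact method.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import NearestNeighbors

# Fit a DR technique (PCA here, as a stand-in) on high-dimensional data.
X, _ = load_iris(return_X_y=True)
dr = PCA(n_components=2).fit(X)
Z = dr.transform(X)  # reduced representation, shape (n_samples, 2)

def local_interpretation(x, n_neighbors=20):
    """Approximate the DR mapping around instance x with a linear
    surrogate fitted on x's neighborhood; its coefficients indicate
    how each original feature drives each reduced dimension."""
    nn = NearestNeighbors(n_neighbors=n_neighbors).fit(X)
    _, idx = nn.kneighbors(x.reshape(1, -1))
    surrogate = LinearRegression().fit(X[idx[0]], Z[idx[0]])
    return surrogate.coef_  # shape: (n_components, n_features)

# Local explanation of the first instance's reduced coordinates.
weights = local_interpretation(X[0])
print(weights)
```

For PCA the surrogate's coefficients should closely match the global principal axes, since the mapping is linear; the local approach matters for nonlinear DR techniques, where the weights can differ from one neighborhood to another.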