Recent studies in machine learning have focused on methods that exploit the symmetry implicit in a specific manifold as an inductive bias. In particular, approaches based on Grassmann manifolds have proven effective in fields such as point cloud and image set analysis. However, little research has addressed the construction of general learning models for distributions on the Grassmann manifold. In this paper, we lay the theoretical foundations for learning distributions on the Grassmann manifold via continuous normalizing flows. Experimental results show that the proposed method generates high-quality samples by capturing the structure of the data, and that it significantly outperforms state-of-the-art methods in terms of log-likelihood or evidence lower bound. We expect these results to stimulate further research in this field.
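As a point of reference for the continuous-normalizing-flow machinery the abstract invokes, the following is a minimal sketch of a Euclidean CNF (an assumption for illustration only; the paper's Grassmann-specific construction is not shown here). A CNF evolves samples by an ODE dz/dt = f(z) while the log-density changes by the instantaneous change of variables, d log p / dt = -tr(∂f/∂z). The linear vector field `A` below is hypothetical, chosen so the exact solution is known.

```python
import numpy as np

# Hypothetical linear vector field f(z) = A z (illustrative only).
A = np.array([[0.5, 0.0],
              [0.0, -0.3]])

def flow(z0, t1=1.0, steps=1000):
    """Integrate dz/dt = A z with Euler steps, accumulating the
    log-density change d log p / dt = -tr(df/dz)."""
    dt = t1 / steps
    z = z0.copy()
    delta_logp = 0.0
    trace = np.trace(A)  # for a linear field, tr(df/dz) is constant
    for _ in range(steps):
        z = z + dt * (A @ z)
        delta_logp -= dt * trace
    return z, delta_logp

# For this field the exact answers are z(1) = exp(A) z0 and
# delta_logp = -tr(A) = -0.2; Euler integration approximates the former
# and recovers the latter up to floating-point accumulation.
z1, dlp = flow(np.array([1.0, 1.0]))
```

In a learned CNF the vector field is a neural network and the trace term is typically estimated stochastically (e.g. with a Hutchinson estimator), since the full Jacobian is expensive; the sketch above sidesteps that by using a field whose trace is constant.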