While representation learning has been central to the rise of machine learning and artificial intelligence, a key problem remains: making the learned representations meaningful. The typical approach is to regularize the learned representation through prior probability distributions, but such priors are usually unavailable or ad hoc. To address this, we propose a dynamics-constrained representation learning framework. Instead of using predefined probabilities, we constrain the latent representation to follow specific dynamics, which is a more natural constraint for representation learning in dynamical systems. This idea stems from a fundamental observation in physics: although different systems can have different marginal probability distributions, they typically obey the same dynamics, such as Newton's and Schrödinger's equations. We validate our framework on different systems, including a real-world fluorescent DNA movie dataset, and show that our algorithm can uniquely identify an uncorrelated, isometric, and meaningful latent representation.
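To make the idea of replacing a prior with a dynamics constraint concrete, here is a minimal sketch, not the paper's actual method: a hypothetical autoencoder whose consecutive latents are penalized for deviating from an assumed latent dynamics (a simple Euler step of a learned drift). The class name, loss form, and drift network are illustrative assumptions, not taken from the abstract.

```python
# Illustrative sketch of dynamics-constrained representation learning (assumptions
# noted below; NOT the paper's implementation). An autoencoder is trained on pairs
# of consecutive observations, and instead of matching the latent to a fixed prior,
# consecutive latents are constrained to follow an assumed dynamics
# z_{t+1} ~= z_t + dt * f(z_t) with a learned drift f.
import torch
import torch.nn as nn

class DynamicsConstrainedAE(nn.Module):
    def __init__(self, x_dim: int, z_dim: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(x_dim, hidden), nn.Tanh(), nn.Linear(hidden, z_dim))
        self.decoder = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.Tanh(), nn.Linear(hidden, x_dim))
        # Hypothetical learnable drift defining the latent dynamics constraint.
        self.drift = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.Tanh(), nn.Linear(hidden, z_dim))

    def forward(self, x_t, x_next, dt: float = 1e-2):
        z_t, z_next = self.encoder(x_t), self.encoder(x_next)
        # Reconstruction term keeps the latent informative about the observation.
        loss_recon = ((self.decoder(z_t) - x_t) ** 2).mean()
        # Dynamics term: consecutive latents should follow the assumed dynamics,
        # replacing the usual prior-based regularization of the latent space.
        loss_dyn = ((z_next - (z_t + dt * self.drift(z_t))) ** 2).mean()
        return loss_recon + loss_dyn

# Toy usage on fake trajectories, shaped (batch, time, x_dim).
if __name__ == "__main__":
    model = DynamicsConstrainedAE(x_dim=10, z_dim=2)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(256, 11, 10)
    x_t, x_next = x[:, :-1].reshape(-1, 10), x[:, 1:].reshape(-1, 10)
    opt.zero_grad()
    loss = model(x_t, x_next)
    loss.backward()
    opt.step()
```

The design choice illustrated here is that the regularizer acts on the transition between latents rather than on their marginal distribution, which is the distinction the abstract draws between prior-based and dynamics-constrained representation learning.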