Neural ordinary differential equations (NODEs) have been proposed as a continuous-depth generalization of popular deep learning models such as residual networks (ResNets). They provide parameter efficiency and, to some extent, automate the model selection process in deep learning. However, they lack the much-needed uncertainty modelling and robustness capabilities that are crucial for their use in several real-world applications such as autonomous driving and healthcare. We propose a novel and unique approach to model uncertainty in NODEs by considering a distribution over the end-time $T$ of the ODE solver. The proposed approach, latent time NODE (LT-NODE), treats $T$ as a latent variable and applies Bayesian learning to obtain a posterior distribution over $T$ from the data. In particular, we use variational inference to learn an approximate posterior and the model parameters. Prediction is done by considering the NODE representations from different samples of the posterior and can be performed efficiently using a single forward pass. As $T$ implicitly defines the depth of a NODE, the posterior distribution over $T$ also helps with model selection in NODEs. We further propose adaptive latent time NODE (ALT-NODE), which allows each data point to have a distinct posterior distribution over end-times. ALT-NODE uses amortized variational inference to learn an approximate posterior using inference networks. We demonstrate the effectiveness of the proposed approaches in modelling uncertainty and robustness through experiments on synthetic and several real-world image classification datasets.
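The prediction scheme described above can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the dynamics function, the log-normal form of the variational posterior $q(T)$, and all parameter values are hypothetical stand-ins. It shows the key efficiency idea from the abstract: sampled end-times are sorted, so a single forward integration up to the largest sample yields the hidden state at every sampled $T$, and predictions are aggregated over those states.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy NODE dynamics dh/dt = tanh(W h + b); W, b stand in for learned parameters.
D = 4
W = rng.normal(scale=0.5, size=(D, D))
b = rng.normal(scale=0.1, size=D)

def dynamics(h):
    return np.tanh(h @ W.T + b)

# Hypothetical variational posterior over the end-time: q(T) = LogNormal(mu, sigma).
mu, sigma = 0.0, 0.3   # assumed learned variational parameters
K = 8                  # number of posterior samples
T_samples = np.sort(np.exp(mu + sigma * rng.standard_normal(K)))

def lt_node_predict(h0, end_times, dt=0.01):
    """Single forward pass: Euler-integrate once up to max(end_times),
    recording the hidden state whenever t crosses a sampled end-time,
    then aggregate the recorded representations."""
    h, t = h0.copy(), 0.0
    states = []
    for T in end_times:           # end_times is sorted ascending
        while t < T:
            h = h + dt * dynamics(h)
            t += dt
        states.append(h.copy())
    states = np.stack(states)
    return states.mean(axis=0), states.std(axis=0)

h0 = rng.normal(size=D)
mean_repr, std_repr = lt_node_predict(h0, T_samples)
```

The spread `std_repr` across end-time samples is what provides the uncertainty estimate; in the adaptive variant (ALT-NODE), an inference network would output per-input values of `mu` and `sigma` instead of the shared ones used here.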