Convolutional deep sets is a deep neural network (DNN) architecture that can model stationary stochastic processes. It combines a kernel smoother with a DNN to construct translation-equivariant functional representations, thereby encoding the inductive bias of stationarity into the DNN. However, because the kernel smoother is a non-parametric model, this architecture may produce ambiguous representations when too few data points are given. To remedy this issue, we introduce Bayesian convolutional deep sets, which construct random translation-equivariant functional representations with a stationary prior. Furthermore, we present how to impose a task-dependent prior for each dataset, because a wrongly imposed prior yields an even worse representation than that of the kernel smoother. We validate the proposed architecture and its training through various experiments with time-series and image datasets.
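To make the translation-equivariant functional representation concrete, the following is a minimal NumPy sketch of the kernel-smoother ("set convolution") encoding used by convolutional deep sets, not the authors' implementation: the function names, the RBF kernel choice, and the two-channel (density, signal) layout are illustrative assumptions.

```python
import numpy as np

def rbf(diffs, length_scale=0.1):
    # Gaussian (RBF) kernel applied to pairwise differences.
    return np.exp(-0.5 * (diffs / length_scale) ** 2)

def set_conv_encoder(x_ctx, y_ctx, x_grid, length_scale=0.1):
    """Kernel-smoother encoding of a context set onto a 1-D grid.

    Returns a two-channel functional representation: a density channel
    (kernel mass at each grid point) and a kernel-smoothed signal channel.
    Because the kernel depends only on differences x_grid - x_ctx, shifting
    the context inputs and the grid together shifts the representation the
    same way, i.e. the encoding is translation equivariant.
    """
    diffs = x_grid[:, None] - x_ctx[None, :]      # (grid, ctx) differences
    w = rbf(diffs, length_scale)                  # kernel weights
    density = w.sum(axis=1)                       # density channel
    signal = w @ y_ctx                            # smoothed signal channel
    return np.stack([density, signal], axis=-1)   # shape (grid, 2)
```

In the full architecture, this gridded representation would then be processed by a (convolutional) DNN; the ambiguity discussed above arises when the density channel is near zero, i.e. when few context points fall near a grid location.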