A new method for multimodal sensor fusion is introduced. The technique relies on a two-stage process. In the first stage, a multimodal generative model is constructed from unlabelled training data. In the second stage, the generative model serves as a reconstruction prior and the search manifold for the sensor fusion tasks. The method also handles cases where observations are accessed only via subsampling, i.e. the compressed sensing setting. We demonstrate the effectiveness and strong performance of the method on a range of multimodal fusion experiments, including multisensory classification, denoising, and recovery from subsampled observations.
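The second stage can be illustrated with a minimal sketch of recovery from subsampled observations: search the latent space of a generative decoder for the code whose output best matches the observed samples. The decoder `G` below is a hypothetical stand-in (a fixed random linear map), not the paper's learned multimodal model, and all dimensions and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained generative decoder G: a fixed
# random linear map. The paper's actual model is learned from
# unlabelled multimodal data; this only illustrates the search step.
n, d, m = 100, 8, 30          # ambient, latent, and measurement dims
W = rng.standard_normal((n, d))
G = lambda z: W @ z

# Ground-truth signal on the generator's range, observed only through
# a random subsampling mask (the compressed-sensing setting).
z_true = rng.standard_normal(d)
idx = rng.choice(n, size=m, replace=False)
y = G(z_true)[idx]

# Search the latent manifold by gradient descent on the subsampled
# reconstruction error ||G(z)[idx] - y||^2; the generator acts as
# the reconstruction prior.
A = W[idx]                    # effective measurement operator
lr = 1.0 / np.linalg.norm(A, 2) ** 2
z = np.zeros(d)
for _ in range(500):
    z -= lr * A.T @ (A @ z - y)

x_hat = G(z)                  # full-signal reconstruction
err = np.linalg.norm(x_hat - G(z_true)) / np.linalg.norm(G(z_true))
```

With a nonlinear learned decoder the same latent-space descent applies, though the objective is then non-convex and typically restarted from several initial codes.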