Natural human interactions for Mixed Reality applications are overwhelmingly multimodal: humans communicate intent and instructions via a combination of visual, aural, and gestural cues. However, supporting low-latency and accurate comprehension of such multimodal instructions (MMI) on resource-constrained wearable devices remains an open challenge, especially as the state-of-the-art comprehension techniques for each individual modality increasingly utilize complex Deep Neural Network models. We demonstrate the possibility of overcoming the core latency-vs.-accuracy tradeoff by exploiting cross-modal dependencies, i.e., by compensating for the inferior performance of one model with the increased accuracy of a more complex model of a different modality. We present a sensor fusion architecture that performs MMI comprehension in a quasi-synchronous fashion by fusing visual, speech, and gestural input. The architecture is reconfigurable and supports dynamic modification of the complexity of the data processing pipeline for each individual modality in response to contextual changes. Using a representative "classroom" context and a set of four common interaction primitives, we then demonstrate how the choices between low- and high-complexity models for each individual modality are coupled. In particular, we show that (a) a judicious combination of low- and high-complexity models across modalities can offer a dramatic 3-fold decrease in comprehension latency together with a 10-15% increase in accuracy, and (b) the right collective choice of models is context dependent, with the performance of some model combinations being significantly more sensitive to changes in scene context or choice of interaction.