Sketch and speech are intuitive interaction modalities that convey complementary information and have each been used independently for 3D model retrieval in virtual environments. While sketching has been shown to be an effective retrieval method, not all collections are easily navigable using this modality alone. We design a new, challenging database for sketch-based retrieval comprising 3D chairs in which each component (arms, legs, seat, back) is independently colored. To overcome this limitation, we implement a multimodal interface for querying 3D model databases within a virtual environment. We base the sketch interface on the state of the art in 3D sketch retrieval and use a Wizard-of-Oz-style experiment to process the voice input, thereby avoiding the complexities of natural language processing, which frequently requires fine-tuning to be robust. We conduct two user studies and show that hybrid search strategies emerge from the combination of interactions, leveraging the advantages provided by both modalities.
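To make the hybrid idea concrete, the following is a minimal illustrative sketch (not the authors' implementation) of how a visual similarity score from a sketch query might be combined with part-color constraints taken from a spoken query over the colored-chair database; all names (`Chair`, `sketch_similarity`, `parse_speech_constraints`, `hybrid_rank`) and the weighting scheme are assumptions for illustration only.

```python
from dataclasses import dataclass

PARTS = ("arms", "legs", "seat", "back")
COLORS = ("red", "green", "blue", "black", "white", "yellow")


@dataclass
class Chair:
    model_id: str
    part_colors: dict  # e.g. {"arms": "red", "legs": "black", "seat": "blue", "back": "blue"}


def sketch_similarity(sketch, chair):
    # Stand-in for a learned sketch-to-3D similarity (e.g. distance between
    # sketch and shape embeddings); returns a constant here as a placeholder.
    return 0.0


def parse_speech_constraints(transcript):
    # Naive stand-in for the wizard's interpretation of the spoken query,
    # e.g. "a chair with red arms and blue seat" -> {"arms": "red", "seat": "blue"}.
    words = [w.strip(",.") for w in transcript.lower().split()]
    constraints = {}
    for i, w in enumerate(words):
        if w in COLORS and i + 1 < len(words) and words[i + 1] in PARTS:
            constraints[words[i + 1]] = w
    return constraints


def hybrid_rank(sketch, transcript, database, weight=0.5):
    # Blend the visual score with the fraction of spoken part-color
    # constraints each candidate chair satisfies.
    constraints = parse_speech_constraints(transcript)
    scored = []
    for chair in database:
        visual = sketch_similarity(sketch, chair)
        matched = sum(chair.part_colors.get(part) == color
                      for part, color in constraints.items())
        verbal = matched / len(constraints) if constraints else 0.0
        scored.append((weight * visual + (1 - weight) * verbal, chair.model_id))
    return sorted(scored, reverse=True)
```

In such a scheme, the sketch narrows the search by shape while speech disambiguates attributes (here, per-part color) that are hard to draw, which is the kind of hybrid strategy the user studies examine.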