We present Submerse, an end-to-end framework for visualizing flooding scenarios on large and immersive display ecologies. Specifically, we reconstruct a surface mesh from input flood-simulation data and generate a to-scale 3D virtual scene by incorporating geographical data such as terrain, textures, buildings, and additional scene objects. To optimize computation and memory performance for large simulation datasets, we discretize the data on an adaptive grid using dynamic quadtrees and support level-of-detail-based rendering. Moreover, to convey the direction of flooding at a given time instance, we animate the surface mesh by synthesizing water waves. As interaction is key to effective decision-making and analysis, we introduce two novel techniques for flood visualization in immersive systems: (1) an automatic scene-navigation method that generates optimal camera viewpoints for marked points of interest based on the display layout, and (2) an AR-based focus+context technique using an auxiliary display system. Submerse was developed in collaboration between computer scientists and atmospheric scientists. We evaluate the effectiveness of our system and application through workshops with emergency managers, domain experts, and concerned stakeholders in the Stony Brook Reality Deck, an immersive gigapixel facility, visualizing a superstorm flooding scenario in New York City.
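The adaptive quadtree discretization mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; all names (`QuadNode`, `depth`, the subdivision tolerance) are hypothetical. The idea shown: a cell subdivides only where the sampled water depth varies beyond a tolerance, and a caller can cap the traversal level to obtain a coarser level of detail, e.g. for regions far from the viewer.

```python
# Hypothetical sketch of adaptive quadtree discretization with LOD queries.
# A cell refines only where the depth field varies; flat regions stay coarse.
class QuadNode:
    def __init__(self, x, y, size, depth_fn, level=0, max_level=6, tol=0.05):
        self.x, self.y, self.size, self.level = x, y, size, level
        # Sample the depth field on a 3x3 stencil over the cell.
        samples = [depth_fn(x + dx * size, y + dy * size)
                   for dx in (0.0, 0.5, 1.0) for dy in (0.0, 0.5, 1.0)]
        self.value = sum(samples) / len(samples)   # representative cell depth
        spread = max(samples) - min(samples)       # local variation
        self.children = []
        if level < max_level and spread > tol:
            half = size / 2
            for cx, cy in ((x, y), (x + half, y), (x, y + half), (x + half, y + half)):
                self.children.append(
                    QuadNode(cx, cy, half, depth_fn, level + 1, max_level, tol))

    def leaves(self, max_lod=None):
        """Collect cells, optionally capping the refinement level (coarser LOD)."""
        if not self.children or (max_lod is not None and self.level >= max_lod):
            return [(self.x, self.y, self.size, self.value)]
        cells = []
        for child in self.children:
            cells.extend(child.leaves(max_lod))
        return cells

# Synthetic depth field: a sharp flood boundary along x = 0.5.
def depth(x, y):
    return 2.0 if x > 0.5 else 0.0

root = QuadNode(0.0, 0.0, 1.0, depth)
fine = root.leaves()              # fully refined near the flood boundary
coarse = root.leaves(max_lod=2)   # capped LOD for distant regions
print(len(fine), len(coarse))     # prints: 190 10
```

Only cells straddling the flood boundary refine down to `max_level`, so memory scales with the length of the wet/dry front rather than the full simulation extent, which is the point of the adaptive grid.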