We present a method for learning neural representations of flow maps from time-varying vector field data. The flow map is pervasive within the area of flow visualization, as it is foundational to numerous visualization techniques, e.g. integral curve computation for pathlines or streaklines, as well as computing separation/attraction structures within the flow field. Yet bottlenecks in flow map computation, namely the numerical integration of vector fields, can easily inhibit their use within interactive visualization settings. In response, we seek neural representations of flow maps that are efficient to evaluate, while remaining scalable to optimize, both in computation cost and data requirements. A key aspect of our approach is that we frame representation learning not as optimizing for samples of the flow map, but rather as satisfying a self-consistency criterion on flow map derivatives, which eliminates the need for flow map samples, and thus numerical integration, altogether. Central to realizing this is a novel neural network design for flow maps, coupled with an optimization scheme, wherein our representation requires only the time-varying vector field for learning, encoded as instantaneous velocity. We show the benefits of our method over prior works in terms of accuracy and efficiency across a range of 2D and 3D time-varying vector fields, and show how our neural representation of flow maps can benefit unsteady flow visualization techniques such as streaklines and the finite-time Lyapunov exponent.
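To make the self-consistency criterion concrete: a flow map φ(x₀, t) must satisfy ∂φ/∂t = v(φ(x₀, t), t), i.e. the time derivative of the mapped position equals the instantaneous velocity sampled at that position. A representation can therefore be supervised by this residual alone, with no integrated trajectories as training data. The sketch below is illustrative only — it is not the paper's network or optimization scheme — and checks the residual for a hand-picked analytic example: a steady rotational field v(x, y) = (−y, x), whose exact flow map is rotation of x₀ by angle t.

```python
import numpy as np

def v(p):
    # Steady rotational vector field: v(x, y) = (-y, x).
    return np.array([-p[1], p[0]])

def flow_map(x0, t):
    # Exact flow map of the rotation field: rotate x0 by angle t.
    c, s = np.cos(t), np.sin(t)
    return np.array([c * x0[0] - s * x0[1],
                     s * x0[0] + c * x0[1]])

def self_consistency_residual(x0, t, h=1e-5):
    # Residual of d/dt phi(x0, t) - v(phi(x0, t)); vanishes for the
    # true flow map. Time derivative taken by central differences here;
    # a learned representation would use automatic differentiation.
    dphi_dt = (flow_map(x0, t + h) - flow_map(x0, t - h)) / (2.0 * h)
    return dphi_dt - v(flow_map(x0, t))

x0 = np.array([1.0, 0.5])
residual = self_consistency_residual(x0, 0.7)
print(np.linalg.norm(residual))  # ≈ 0: the exact map satisfies the criterion
```

In a learned setting, `flow_map` would be a neural network and the squared norm of this residual, averaged over sampled (x₀, t), would serve as the training loss — requiring only evaluations of the vector field v, never integrated flow map samples.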