High-fidelity reconstruction of fluids from sparse multiview RGB videos remains a formidable challenge, due to the complexity of the underlying physics as well as the complex occlusion and lighting encountered in captures. Existing solutions either assume knowledge of obstacles and lighting, or only focus on simple fluid scenes without obstacles or complex lighting, and are thus unsuitable for real-world scenes with unknown lighting or arbitrary obstacles. We present the first method to reconstruct dynamic fluids by leveraging the governing physics (i.e., the Navier-Stokes equations) in an end-to-end optimization from sparse videos, without taking lighting conditions, geometry information, or boundary conditions as input. We provide a continuous spatio-temporal scene representation using neural networks as the ansatz for the density and velocity solution functions of the fluid as well as the radiance field of static objects. With a hybrid architecture that separates static and dynamic content, fluid interactions with static obstacles are reconstructed for the first time without additional geometry input or human labeling. By augmenting time-varying neural radiance fields with physics-informed deep learning, our method benefits from the supervision of both images and physical priors. To achieve robust optimization from sparse views, we introduce a layer-by-layer growing strategy that progressively increases the network capacity. Using these progressively growing models with a new regularization term, we manage to disentangle the density-color ambiguity in radiance fields without overfitting. In addition, a pretrained density-to-velocity fluid model is leveraged as a data prior to avoid suboptimal velocity solutions that trivially fulfill the physical equations while underestimating vorticity. Our method produces high-quality results with relaxed constraints and strong flexibility on a representative set of synthetic and real flow captures.
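For intuition, the sketch below shows, in JAX, how such physics priors can supervise neural density and velocity fields through automatic differentiation. It is an illustrative simplification under stated assumptions, not the paper's implementation: it penalizes only the density transport equation and the divergence-free condition of incompressible flow, omitting the full Navier-Stokes momentum residual, and the tiny MLPs stand in for the actual network architecture.

```python
import jax
import jax.numpy as jnp

def mlp(params, xt):
    # Tiny fully connected network over space-time coordinates (x, y, z, t).
    h = xt
    for w, b in params[:-1]:
        h = jnp.tanh(h @ w + b)
    w, b = params[-1]
    return h @ w + b

def init_mlp(key, sizes):
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (a, b)) * jnp.sqrt(2.0 / a), jnp.zeros(b))
            for k, a, b in zip(keys, sizes[:-1], sizes[1:])]

def residuals(v_params, d_params, xt):
    """Physics residuals at one space-time sample; both vanish for a
    density field advected by an incompressible velocity field."""
    u_fn = lambda p: mlp(v_params, p)         # velocity u(x, t) in R^3
    d_fn = lambda p: mlp(d_params, p)[0]      # scalar smoke density
    u = u_fn(xt)
    J_u = jax.jacfwd(u_fn)(xt)                # 3x4 Jacobian du/d(x, y, z, t)
    g_d = jax.grad(d_fn)(xt)                  # 4-vector (grad_x d, dd/dt)
    transport = g_d[3] + jnp.dot(u, g_d[:3])  # d_t rho + u . grad rho = 0
    divergence = jnp.trace(J_u[:, :3])        # div u = 0 (incompressibility)
    return transport, divergence

def physics_loss(v_params, d_params, samples):
    # Mean squared residuals over random space-time samples, added to the
    # image reconstruction loss during end-to-end optimization.
    t_res, div_res = jax.vmap(lambda p: residuals(v_params, d_params, p))(samples)
    return jnp.mean(t_res ** 2) + jnp.mean(div_res ** 2)

v_params = init_mlp(jax.random.PRNGKey(0), [4, 64, 64, 3])
d_params = init_mlp(jax.random.PRNGKey(1), [4, 64, 64, 1])
samples = jax.random.uniform(jax.random.PRNGKey(2), (1024, 4))
print(physics_loss(v_params, d_params, samples))
```

Because the fields are continuous functions of space-time, the derivatives in the residuals come directly from automatic differentiation rather than finite differences on a grid, which is what lets image supervision and physical priors share one optimization.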
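The layer-by-layer growing strategy can likewise be sketched. The fade-in blending below is an assumption about one common way to grow networks smoothly, not the paper's exact schedule: newly activated hidden layers are mixed in as residuals with a weight `alpha` ramped from 0 to 1, so added capacity does not disrupt the coarse fit already obtained from sparse views.

```python
import jax
import jax.numpy as jnp

def growing_mlp(params, x, n_active, alpha):
    """Evaluate an MLP using only the first `n_active` hidden layers;
    the newest active layer is faded in as a residual with weight `alpha`."""
    hidden, head = params
    h = x
    for i, (w, b) in enumerate(hidden[:n_active]):
        out = jnp.tanh(h @ w + b)
        # Blend the newest layer with its input (shapes match after layer 0).
        h = (1.0 - alpha) * h + alpha * out if (i == n_active - 1 and i > 0) else out
    w, b = head
    return h @ w + b

def init(key, din, width, depth, dout):
    ks = jax.random.split(key, depth + 1)
    dims = [din] + [width] * depth
    hidden = [(jax.random.normal(k, (a, b)) * jnp.sqrt(2.0 / a), jnp.zeros(b))
              for k, a, b in zip(ks[:-1], dims[:-1], dims[1:])]
    head = (jax.random.normal(ks[-1], (width, dout)) * jnp.sqrt(2.0 / width),
            jnp.zeros(dout))
    return hidden, head

params = init(jax.random.PRNGKey(0), din=4, width=64, depth=4, dout=1)
x = jnp.ones(4)
# Schedule: train with n_active = 1, then grow one layer and ramp alpha 0 -> 1.
print(growing_mlp(params, x, n_active=2, alpha=0.3))
```

Starting shallow biases early training toward smooth, low-frequency solutions, which is one plausible reason such a schedule helps disentangle density-color ambiguity without overfitting to the sparse views.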