We propose a method for jointly estimating the 3D motion, 3D shape, and appearance of highly motion-blurred objects from a video. To this end, we model the blurred appearance of a fast-moving object in a generative fashion by parametrizing its 3D position, rotation, velocity, acceleration, bounces, shape, and texture over the duration of a predefined time window spanning multiple frames. Using differentiable rendering, we estimate all parameters by minimizing the pixel-wise reprojection error to the input video, backpropagating through a rendering pipeline that accounts for motion blur by averaging the graphics output over short time intervals. For that purpose, we also estimate the camera exposure gap time within the same optimization. To account for abrupt motion changes such as bounces, we model the motion trajectory as a piecewise polynomial and estimate the time of the bounce with sub-frame accuracy. Experiments on established benchmark datasets demonstrate that our method outperforms previous methods for fast-moving object deblurring and 3D reconstruction.
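The two core modeling ideas of the abstract, a piecewise-polynomial trajectory with a sub-frame bounce time and blur formation by averaging rendered output over the exposure interval, can be illustrated with a minimal NumPy sketch. This is not the authors' differentiable rendering pipeline; the function names, the 1D `render` stand-in, and the polynomial coefficients are hypothetical choices for illustration only.

```python
import numpy as np

def trajectory(t, t_bounce, poly_before, poly_after):
    """Piecewise-polynomial position: one set of coefficients before the
    bounce time t_bounce (which may fall between frames), another after."""
    coeffs = poly_before if t < t_bounce else poly_after
    return np.polyval(coeffs, t)

def blurred_frame(render, t_start, exposure, n_sub=16):
    """Approximate motion blur by averaging the renderer's output at
    n_sub time samples spread over the exposure interval."""
    ts = np.linspace(t_start, t_start + exposure, n_sub)
    return np.mean([render(t) for t in ts], axis=0)

# Toy example: an object moving up with slope 1, bouncing at t = 1.0,
# then moving down with slope -1 (coefficients in np.polyval order).
poly_before = [1.0, 0.0]   # p(t) = t
poly_after = [-1.0, 2.0]   # p(t) = -t + 2
pos = lambda t: trajectory(t, 1.0, poly_before, poly_after)

# "Render" is a stand-in returning the object position as a 1D signal;
# a real pipeline would rasterize shape and texture at pose pos(t).
frame = blurred_frame(lambda t: np.array([pos(t)]), t_start=0.5, exposure=1.0)
```

In the full method, `render` would be a differentiable renderer, so the reprojection loss against the observed blurred frames can be backpropagated to the trajectory coefficients, the bounce time, and the exposure gap jointly.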