With the advances in deep learning for computer vision, systems are now able to analyze an unprecedented amount of rich visual information from videos, enabling applications such as autonomous driving, socially-aware robot assistants, and public safety monitoring. In these applications, it is important to decipher human behaviors from videos in order to predict their future paths/trajectories and activities. However, human trajectory prediction remains a challenging task, as scene semantics and human intent are difficult to model. Many systems do not provide high-level semantic attributes for reasoning about pedestrians' futures. This design hinders prediction performance on video data from diverse domains and unseen scenarios. To enable optimal forecasting of future human behavior, it is crucial for the system to detect and analyze human activities as well as scene semantics, passing informative features to the subsequent prediction module for context understanding.