We present cross-view transformers, an efficient attention-based model for map-view semantic segmentation from multiple cameras. Our architecture implicitly learns a mapping from individual camera views into a canonical map-view representation using a camera-aware cross-view attention mechanism. Each camera uses positional embeddings that depend on its intrinsic and extrinsic calibration. These embeddings allow a transformer to learn the mapping across different views without ever explicitly modeling it geometrically. The architecture consists of a convolutional image encoder for each view and cross-view transformer layers to infer a map-view semantic segmentation. Our model is simple, easily parallelizable, and runs in real-time. The presented architecture performs at state-of-the-art on the nuScenes dataset, with 4x faster inference speeds. Code is available at https://github.com/bradyz/cross_view_transformers.
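To make the camera-aware cross-view attention concrete, below is a minimal sketch (not the authors' implementation; see the linked repository for that) of the idea described above: map-view queries cross-attend to image features whose positional embeddings are derived from each camera's intrinsic and extrinsic calibration. The layer sizes, map grid resolution, and ray-based embedding are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CameraAwareCrossAttention(nn.Module):
    """Illustrative cross-view attention with calibration-dependent embeddings."""

    def __init__(self, dim=128, heads=4, map_size=25):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ray_embed = nn.Linear(3, dim)  # embeds per-pixel viewing-ray directions
        # Learned queries for a map_size x map_size map-view grid (assumed resolution).
        self.map_query = nn.Parameter(torch.randn(1, map_size * map_size, dim))

    def forward(self, feats, intrinsics, extrinsics):
        # feats:      (B, N, C, H, W) image features from a shared CNN encoder
        # intrinsics: (B, N, 3, 3); extrinsics: (B, N, 4, 4) camera-to-world transforms
        B, N, C, H, W = feats.shape

        # Build a pixel grid and unproject it to viewing rays with K^{-1},
        # then rotate the rays into a common frame with the extrinsic rotation.
        ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
        pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).float()       # (H, W, 3)
        pix = pix.view(1, 1, H * W, 3).expand(B, N, -1, -1)                    # (B, N, HW, 3)
        rays = torch.einsum("bnij,bnpj->bnpi", intrinsics.inverse(), pix)      # camera-frame rays
        rays = torch.einsum("bnij,bnpj->bnpi", extrinsics[..., :3, :3], rays)  # common-frame rays
        rays = F.normalize(rays, dim=-1)

        # Keys/values: image features plus their camera-aware positional embeddings.
        kv = feats.flatten(3).transpose(2, 3).reshape(B, N * H * W, C)
        kv = kv + self.ray_embed(rays.reshape(B, N * H * W, 3))

        # Map-view queries attend jointly over all cameras; no explicit geometric
        # projection is performed, the attention learns the view mapping.
        q = self.map_query.expand(B, -1, -1)
        out, _ = self.attn(q, kv, kv)
        return out  # (B, map_size * map_size, C) map-view features


# Example usage with 6 surround-view cameras and 128-channel feature maps.
layer = CameraAwareCrossAttention(dim=128)
feats = torch.randn(2, 6, 128, 28, 60)
K = torch.eye(3).repeat(2, 6, 1, 1)
E = torch.eye(4).repeat(2, 6, 1, 1)
print(layer(feats, K, E).shape)  # torch.Size([2, 625, 128])
```

A segmentation head on top of the map-view features would then produce the map-view semantic masks; the full model in the repository stacks several such layers over a multi-scale image encoder.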