Current work on lane detection relies on large, manually annotated datasets. We reduce the dependency on annotations by leveraging massive amounts of cheaply available unlabelled data. We propose a novel loss function that exploits geometric knowledge of lanes in Hough space, where a lane can be identified as a local maximum. By splitting lanes into separate channels, we can localize each lane via simple global max-pooling. The location of the maximum encodes the layout of a lane, while its intensity indicates the probability of a lane being present. Maximizing the log-probability of the maximal bins helps neural networks find lanes without labels. On the CULane and TuSimple datasets, we show that the proposed Hough Transform loss improves performance significantly by learning from large amounts of unlabelled images.
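To make the idea concrete, below is a minimal sketch of such a Hough-space max-pooling loss. The function name `hough_max_loss`, the tensor shapes, and the binary cross-entropy formulation are assumptions for illustration only; the paper's exact parameterization of the Hough bins and of the log-probability term may differ.

```python
import torch
import torch.nn.functional as F

def hough_max_loss(hough_maps, lane_present):
    """Hedged sketch of a Hough-space max-pooling loss (not the authors' exact code).

    hough_maps:   (B, L, R, T) per-lane Hough-space activations, one channel per
                  lane, over R offset bins and T angle bins.
    lane_present: (B, L) binary tensor; 1 if the corresponding lane exists.
    """
    B, L, R, T = hough_maps.shape
    # Global max-pooling: pick the strongest Hough bin in each lane channel.
    max_bins = hough_maps.view(B, L, R * T).max(dim=-1).values  # (B, L)
    # Treat the maximal bin as the logit for "lane present" and maximize its
    # log-probability (here written in binary cross-entropy form).
    return F.binary_cross_entropy_with_logits(max_bins, lane_present.float())
```

Only an image-level presence label (or none at all, if presence is assumed) is needed to drive this term, which is why it can be applied to large pools of unlabelled or weakly labelled images.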