Abstract

Perception in autonomous vehicles is often carried out through a suite of different sensing modalities. Given the massive amount of openly available labeled RGB data and the advent of high-quality deep learning algorithms for image-based recognition, high-level semantic perception tasks are predominantly solved using high-resolution cameras. As a result, other sensor modalities potentially useful for this task are often ignored. In this paper, we push the state of the art in LiDAR-only semantic segmentation forward in order to provide another independent source of semantic information to the vehicle. Our approach can accurately perform full semantic segmentation of LiDAR point clouds at sensor frame rate. We exploit range images as an intermediate representation in combination with a Convolutional Neural Network (CNN) exploiting the rotating LiDAR sensor model. To obtain accurate results, we propose a novel post-processing algorithm that deals with problems arising from this intermediate representation, such as discretization errors and blurry CNN outputs. We implemented and thoroughly evaluated our approach, including several comparisons to the state of the art. Our experiments show that our approach outperforms state-of-the-art approaches while still running online on a single embedded GPU. The code can be accessed at https://github.com/PRBonn/lidar-bonnetal.
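The range-image representation mentioned in the abstract maps each 3D LiDAR point to a pixel via a spherical projection of its yaw and pitch angles. The sketch below illustrates this projection in NumPy; the sensor field-of-view values mimic a typical rotating LiDAR and are assumptions for illustration, not the paper's exact implementation (which additionally orders points by range so closer points win collisions).

```python
import numpy as np

def spherical_projection(points, H=64, W=1024, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project a LiDAR point cloud of shape (N, 3) onto an (H, W) range image.

    Illustrative sketch: H, W, and the vertical field of view are assumed
    values for a 64-beam rotating LiDAR, not taken from the paper.
    """
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = fov_up - fov_down  # total vertical field of view in radians

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)  # range of each point

    yaw = np.arctan2(y, x)       # horizontal angle in [-pi, pi]
    pitch = np.arcsin(z / r)     # vertical angle

    # normalize both angles to [0, 1] image coordinates
    u = 0.5 * (1.0 - yaw / np.pi)           # horizontal coordinate
    v = 1.0 - (pitch - fov_down) / fov      # vertical coordinate

    # scale to pixel indices and clamp to the image bounds
    u = np.clip(np.floor(u * W), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v * H), 0, H - 1).astype(np.int32)

    # fill the range image; here the last write simply wins on collisions
    img = np.full((H, W), -1.0, dtype=np.float32)
    img[v, u] = r
    return img, u, v
```

The per-point pixel indices (u, v) are what makes the representation invertible: after the CNN labels each pixel, every original 3D point can look up its class at its projected location, which is also where the discretization errors addressed by the post-processing step arise.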

Keywords

Computer science, LiDAR, Artificial intelligence, Segmentation, Convolutional neural network, Point cloud, Frame rate, Representation, Computer vision, Deep learning, RGB color model, Ranging, Semantics, Remote sensing

Publication Info

Year: 2019
Type: Article
Pages: 4213-4220
Citations: 1117
Access: Closed


Citation Metrics

Citations (OpenAlex): 1117

Cite This

Andres Milioto, Ignacio Vizzo, Jens Behley et al. (2019). RangeNet++: Fast and Accurate LiDAR Semantic Segmentation. 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 4213-4220. https://doi.org/10.1109/iros40897.2019.8967762

Identifiers

DOI
10.1109/iros40897.2019.8967762