Abstract

We study the problem of efficient semantic segmentation for large-scale 3D point clouds. Because they rely on expensive sampling techniques or computationally heavy pre/post-processing steps, most existing approaches can only be trained on and operate over small-scale point clouds. In this paper, we introduce RandLA-Net, an efficient and lightweight neural architecture that directly infers per-point semantics for large-scale point clouds. The key to our approach is to use random point sampling instead of more complex point selection schemes. Although remarkably computation- and memory-efficient, random sampling can discard key features by chance. To overcome this, we introduce a novel local feature aggregation module that progressively increases the receptive field for each 3D point, thereby effectively preserving geometric details. Extensive experiments show that our RandLA-Net can process 1 million points in a single pass, up to 200x faster than existing approaches. Moreover, our RandLA-Net clearly surpasses state-of-the-art approaches for semantic segmentation on two large-scale benchmarks, Semantic3D and SemanticKITTI.
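The efficiency claim in the abstract rests on uniform random sampling, which costs O(1) per selected point, versus schemes like farthest point sampling that scale poorly with cloud size. The step can be sketched as below; this is a minimal illustration using a plain-Python point list, and `random_sample` is a hypothetical helper, not the paper's implementation.

```python
import random

def random_sample(points, k, seed=0):
    """Uniformly sample k points without replacement.

    Hypothetical helper illustrating the random sampling step the
    abstract describes; a real pipeline would operate on an (N, 3)
    array of coordinates plus per-point features.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    return rng.sample(points, k)

# Toy "point cloud": 1000 points on a line in 3D.
cloud = [(float(i), 0.5 * i, 0.0) for i in range(1000)]

# Downsample to 25% of the points, as a decimation stage might.
subset = random_sample(cloud, 250)
```

Each sampling stage in such an encoder simply keeps a random subset of the points; the paper's local feature aggregation module then compensates for any geometry the subset happens to miss.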

Keywords

Point cloud, Computer science, Segmentation, Semantics (computer science), Scale (ratio), Sampling (signal processing), Point (geometry), Artificial intelligence, Feature (linguistics), Key (lock), Point process, Field (mathematics), Net (polyhedron), Pattern recognition (psychology), Data mining, Computer vision, Mathematics, Geography

Publication Info

Year
2020
Type
article
Citations
1823
Access
Closed

Citation Metrics

OpenAlex: 1823
Influential: 253
CrossRef: 1514

Cite This

Qingyong Hu, Bo Yang, Linhai Xie et al. (2020). RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/cvpr42600.2020.01112

Identifiers

DOI
10.1109/cvpr42600.2020.01112
arXiv
1911.11236