Abstract

Object detection in 3D with stereo cameras is an important problem in computer vision, and it is particularly crucial for low-cost autonomous mobile robots without LiDARs. Nowadays, most of the best-performing frameworks for stereo 3D object detection are based on dense depth reconstruction from disparity estimation, making them extremely computationally expensive. To enable real-world deployment of vision-based detection with binocular images, we take a step back to gain insights from 2D image-based detection frameworks and enhance them with stereo features. We incorporate knowledge and the inference structure of a real-time one-stage 2D/3D object detector and introduce a lightweight stereo matching module. Our proposed framework, YOLOStereo3D, is trained on a single GPU and runs at more than ten fps. It demonstrates performance comparable to state-of-the-art stereo 3D detection frameworks without the use of LiDAR data. The code will be published at https://github.com/Owen-Liuyuxuan/visualDet3D. © 2021 IEEE
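The abstract only sketches the architecture, so the following is a minimal, hypothetical PyTorch illustration of the central idea: instead of reconstructing a dense depth map, correlate left/right backbone features over a small disparity range and fuse the resulting compact cost volume back into the detector's feature map for a standard one-stage head. All class and parameter names here (CorrelationCostVolume, StereoEnhancedHead, max_disparity=24, feat_channels=256) are assumptions made for illustration, not the authors' actual implementation; the real code is in the repository linked above.

# Illustrative sketch only; module/parameter names are hypothetical,
# not taken from the visualDet3D codebase.
import torch
import torch.nn as nn


class CorrelationCostVolume(nn.Module):
    """Builds a compact cost volume by correlating left/right feature
    maps over a small set of candidate disparities, which is far cheaper
    than full dense disparity estimation."""

    def __init__(self, max_disparity: int = 24):
        super().__init__()
        self.max_disparity = max_disparity

    def forward(self, left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
        # left, right: (B, C, H, W) features from a shared backbone.
        b, c, h, w = left.shape
        cost = left.new_zeros(b, self.max_disparity, h, w)
        for d in range(self.max_disparity):
            if d == 0:
                cost[:, d] = (left * right).mean(dim=1)
            else:
                # Shift the right features by d pixels and correlate.
                cost[:, d, :, d:] = (left[..., d:] * right[..., :-d]).mean(dim=1)
        return cost  # (B, max_disparity, H, W): one channel per disparity.


class StereoEnhancedHead(nn.Module):
    """Fuses the cost volume with the left-image features so a standard
    one-stage detection head can regress 2D/3D boxes from the result."""

    def __init__(self, feat_channels: int = 256, max_disparity: int = 24):
        super().__init__()
        self.matching = CorrelationCostVolume(max_disparity)
        self.fuse = nn.Conv2d(feat_channels + max_disparity,
                              feat_channels, kernel_size=3, padding=1)

    def forward(self, left_feat: torch.Tensor, right_feat: torch.Tensor) -> torch.Tensor:
        cost = self.matching(left_feat, right_feat)
        return self.fuse(torch.cat([left_feat, cost], dim=1))

In use, left_feat and right_feat would come from running one shared backbone (e.g. a ResNet) on the left and right images. Because the correlation covers only a few dozen candidate disparities on downsampled features, its cost is a small fraction of a full disparity network, which is consistent with the real-time claim in the abstract.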

Keywords

Artificial intelligence, Computer science, Computer vision, Object detection, LiDAR, Stereopsis, Stereo cameras, Object, Computer stereo vision, Matching, Robot, Mobile robot, Detector, Code, Inference, Pattern recognition

Publication Info

Year: 2021
Type: Article
Pages: 13018-13024
Citations: 71
Access: Closed


Citation Metrics

71 citations (OpenAlex)

Cite This

Yuxuan Liu, Lujia Wang, Ming Liu (2021). YOLOStereo3D: A Step Back to 2D for Efficient Stereo 3D Detection. 2021 IEEE International Conference on Robotics and Automation (ICRA), 13018-13024. https://doi.org/10.1109/icra48506.2021.9561423

Identifiers

DOI: 10.1109/icra48506.2021.9561423