Abstract

Recent work has shown that depth estimation from a stereo pair of images can be formulated as a supervised learning task to be resolved with convolutional neural networks (CNNs). However, current architectures rely on patch-based Siamese networks and lack the means to exploit context information for finding correspondence in ill-posed regions. To tackle this problem, we propose PSMNet, a pyramid stereo matching network consisting of two main modules: spatial pyramid pooling and a 3D CNN. The spatial pyramid pooling module exploits global context information by aggregating context at different scales and locations to form a cost volume. The 3D CNN learns to regularize the cost volume using stacked hourglass networks in conjunction with intermediate supervision. The proposed approach was evaluated on several benchmark datasets; our method ranked first on the KITTI 2012 and 2015 leaderboards before March 18, 2018. The code for PSMNet is available at: https://github.com/JiaRenChang/PSMNet.
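
The abstract describes two modules that feed into disparity regression. The following is a minimal PyTorch sketch of how such a pipeline can fit together; it is not the authors' released implementation, and the backbone, pooling scales, channel counts, maximum disparity, and the plain 3D CNN standing in for the stacked hourglass regularization are all illustrative assumptions.

# Minimal sketch (assumptions labeled): pyramid-pooled features, a
# concatenation-based cost volume over disparity hypotheses, a small 3D CNN
# as a stand-in for the stacked-hourglass regularizer, and soft-argmin
# disparity regression.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SPPFeature(nn.Module):
    """Tiny CNN feature extractor followed by spatial pyramid pooling."""
    def __init__(self, in_ch=3, feat_ch=32, pool_sizes=(64, 32, 16, 8)):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=4, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # One branch per pooling scale: pool, reduce channels, later upsample
        # back so all branches can be concatenated with the base feature map.
        self.branches = nn.ModuleList(
            nn.Sequential(nn.AvgPool2d(s, stride=s),
                          nn.Conv2d(feat_ch, feat_ch // 4, 1),
                          nn.ReLU(inplace=True))
            for s in pool_sizes
        )
        self.fuse = nn.Conv2d(feat_ch + len(pool_sizes) * (feat_ch // 4),
                              feat_ch, 3, padding=1)

    def forward(self, x):
        f = self.backbone(x)
        h, w = f.shape[2:]
        pooled = [F.interpolate(b(f), size=(h, w), mode='bilinear',
                                align_corners=False) for b in self.branches]
        return self.fuse(torch.cat([f] + pooled, dim=1))


def build_cost_volume(left_feat, right_feat, max_disp):
    """Concatenate left and shifted right features for each candidate disparity,
    producing a 5D tensor of shape (N, 2C, max_disp, H, W)."""
    n, c, h, w = left_feat.shape
    cost = left_feat.new_zeros(n, 2 * c, max_disp, h, w)
    for d in range(max_disp):
        if d == 0:
            cost[:, :c, d] = left_feat
            cost[:, c:, d] = right_feat
        else:
            cost[:, :c, d, :, d:] = left_feat[:, :, :, d:]
            cost[:, c:, d, :, d:] = right_feat[:, :, :, :-d]
    return cost


class CostRegularizer(nn.Module):
    """A small 3D CNN standing in for the stacked-hourglass regularization."""
    def __init__(self, in_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, 1, 3, padding=1),
        )

    def forward(self, cost):
        # Per-pixel matching cost for every disparity hypothesis.
        return self.net(cost).squeeze(1)          # (N, max_disp, H, W)


if __name__ == "__main__":
    max_disp = 48                                 # hypothetical setting
    feat = SPPFeature()
    reg = CostRegularizer(in_ch=64)
    left = torch.randn(1, 3, 256, 512)
    right = torch.randn(1, 3, 256, 512)
    fl, fr = feat(left), feat(right)              # quarter-resolution features
    cost = build_cost_volume(fl, fr, max_disp // 4)
    scores = reg(cost)
    # Soft-argmin: expected disparity under a softmax over hypotheses.
    prob = F.softmax(-scores, dim=1)
    disps = torch.arange(scores.shape[1], dtype=prob.dtype).view(1, -1, 1, 1)
    disparity = (prob * disps).sum(dim=1)
    print(disparity.shape)                        # torch.Size([1, 64, 128])

The soft-argmin step at the end is what makes the whole pipeline differentiable end to end: instead of picking the single best disparity, it regresses the expectation over all hypotheses, so gradients flow back through the cost volume and the pooled features.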

Keywords

Pyramid, Pooling, Computer science, Context, Artificial intelligence, Benchmark, Convolutional neural network, Matching, Exploit, Task, Pattern recognition, Computer vision, Machine learning, Mathematics

Publication Info

Year
2018
Type
conference paper
Pages
5410-5418
Citations
1686
Access
Closed

Citation Metrics

OpenAlex
1686
Influential
424
CrossRef
1401

Cite This

Jia-Ren Chang, Yong-Sheng Chen (2018). Pyramid Stereo Matching Network. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5410-5418. https://doi.org/10.1109/cvpr.2018.00567

Identifiers

DOI
10.1109/cvpr.2018.00567
arXiv
1803.08669
