Abstract

A key technical challenge in performing 6D object pose estimation from RGB-D images is to fully leverage the two complementary data sources. Prior works either extract information from the RGB image and the depth map separately or use costly post-processing steps, limiting their performance in highly cluttered scenes and real-time applications. In this work, we present DenseFusion, a generic framework for estimating the 6D pose of a set of known objects from RGB-D images. DenseFusion is a heterogeneous architecture that processes the two data sources individually and uses a novel dense fusion network to extract pixel-wise dense feature embeddings, from which the pose is estimated. Furthermore, we integrate an end-to-end iterative pose refinement procedure that further improves the pose estimation while achieving near real-time inference. Our experiments show that our method outperforms state-of-the-art approaches on two datasets, YCB-Video and LineMOD. We also deploy our proposed method to a real robot to grasp and manipulate objects based on the estimated poses.
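
The fusion the abstract describes operates per pixel: color features from a CNN and geometric features from a point-cloud encoder are concatenated point-by-point, enriched with a pooled global feature, and decoded into a per-point pose and confidence. Below is a minimal PyTorch-style sketch of that idea; it is an illustration under assumed feature dimensions, not the authors' implementation, and the names (DenseFusionSketch, c_rgb, c_geo, c_global) are hypothetical.

```python
import torch
import torch.nn as nn

class DenseFusionSketch(nn.Module):
    """Illustrative per-pixel dense fusion module (not the authors' code).

    Inputs are assumed to be N masked object pixels with color features
    feat_rgb of shape (B, c_rgb, N) from a CNN and geometric features
    feat_geo of shape (B, c_geo, N) from a PointNet-style encoder over
    the corresponding depth points.
    """

    def __init__(self, c_rgb=32, c_geo=64, c_global=256):
        super().__init__()
        # Per-point MLP (1x1 convolutions) applied to concatenated features.
        self.fuse = nn.Sequential(
            nn.Conv1d(c_rgb + c_geo, 128, 1), nn.ReLU(),
            nn.Conv1d(128, c_global, 1), nn.ReLU(),
        )
        # Per-point pose head: quaternion (4) + translation (3) + confidence (1).
        self.head = nn.Conv1d(c_rgb + c_geo + c_global, 8, 1)

    def forward(self, feat_rgb, feat_geo):
        per_point = torch.cat([feat_rgb, feat_geo], dim=1)  # (B, c_rgb+c_geo, N)
        fused = self.fuse(per_point)                        # (B, c_global, N)
        # Pool a global feature over all points and broadcast it back,
        # so every point sees both local and scene-level context.
        global_feat = fused.mean(dim=2, keepdim=True).expand_as(fused)
        out = self.head(torch.cat([per_point, global_feat], dim=1))  # (B, 8, N)
        return out[:, :4], out[:, 4:7], out[:, 7:]  # quat, trans, confidence


# Example: 500 sampled object pixels per image, batch of 2.
quat, trans, conf = DenseFusionSketch()(torch.randn(2, 32, 500),
                                        torch.randn(2, 64, 500))
```

Per the paper, the pose predicted at the point with the highest learned confidence is selected at test time, and the fused embedding is then fed to an iterative refinement network that predicts residual pose corrections.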

Keywords

Pose, Artificial intelligence, Computer science, Computer vision, Leverage (statistics), RGB color model, 3D pose estimation, Embedding, Iterative closest point, Pattern recognition (psychology), Point cloud

Publication Info

Year
2019
Type
article
Pages
3338-3347
Citations
1063 (OpenAlex)
Access
Closed

Cite This

Chen Wang, Danfei Xu, Yuke Zhu et al. (2019). DenseFusion: 6D Object Pose Estimation by Iterative Dense Fusion. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 3338-3347. https://doi.org/10.1109/cvpr.2019.00346

Identifiers

DOI
10.1109/cvpr.2019.00346