Abstract

This paper considers the task of articulated human pose estimation for multiple people in real-world images. We propose an approach that jointly solves the tasks of detection and pose estimation: it infers the number of persons in a scene, identifies occluded body parts, and disambiguates body parts between people in close proximity to each other. This joint formulation contrasts with previous strategies, which address the problem by first detecting people and subsequently estimating their body pose. We propose a partitioning and labeling formulation over a set of body-part hypotheses generated with CNN-based part detectors. Our formulation, an instance of an integer linear program, implicitly performs non-maximum suppression on the set of part candidates and groups them into configurations of body parts that respect geometric and appearance constraints. Experiments on four different datasets demonstrate state-of-the-art results for both single-person and multi-person pose estimation.
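To make the joint formulation concrete, the following is a minimal toy sketch of the partition-and-label idea, not the paper's actual ILP or solver: each part candidate is either suppressed or assigned a part label and a person cluster, and one cost minimization then performs non-maximum suppression and grouping simultaneously. All candidate positions, detection scores, and cost weights below are invented for illustration, and the optimization is a brute-force enumeration rather than an integer linear program.

```python
from itertools import product
from math import dist

# Invented part candidates: (position, per-part detection scores).
candidates = [
    ((0.0, 0.0), {"head": 0.9, "neck": 0.1}),
    ((0.2, 0.0), {"head": 0.8, "neck": 0.2}),  # near-duplicate head
    ((0.0, 1.0), {"head": 0.1, "neck": 0.9}),
    ((5.0, 0.0), {"head": 0.9, "neck": 0.1}),
    ((5.0, 1.0), {"head": 0.1, "neck": 0.9}),
]
LABELS = (None, "head", "neck")  # None = candidate is suppressed
PERSONS = (0, 1)                 # allow at most two person clusters

def total_cost(labels, persons):
    # Unary term: cost of the chosen label, or a fixed suppression penalty.
    cost = sum(0.5 if lab is None else 1.0 - scores[lab]
               for (pos, scores), lab in zip(candidates, labels))
    # Pairwise terms encode geometric and exclusivity constraints.
    for i in range(len(candidates)):
        for j in range(i + 1, len(candidates)):
            if labels[i] is None or labels[j] is None:
                continue
            d = dist(candidates[i][0], candidates[j][0])
            if persons[i] == persons[j]:
                if labels[i] == labels[j]:
                    cost += 10.0          # a person has one of each part
                else:
                    cost += abs(d - 1.0)  # prior: head-neck distance ~ 1
            else:
                cost += max(0.0, 2.0 - d)  # distinct people should be apart
    return cost

# Brute-force the joint assignment (3^5 * 2^5 = 7776 combinations).
best = min(product(product(LABELS, repeat=5), product(PERSONS, repeat=5)),
           key=lambda lp: total_cost(*lp))
labels, persons = best
print("labels:", labels)    # the duplicate head candidate gets suppressed
print("persons:", persons)  # remaining parts split into two clusters
```

In the optimum, the near-duplicate head detection is suppressed (implicit NMS) while the remaining parts are grouped into two head-neck pairs, mirroring how the paper's single objective jointly decides the number of people and their part assignments; the actual method scales this up with an ILP solver over many part classes and learned pairwise terms.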

Keywords

Pose, Computer science, Artificial intelligence, Partition (number theory), Joint (building), Set (abstract data type), Task (project management), Estimation, Linear programming, Computer vision, Pattern recognition (psychology), Object detection, Machine learning, Algorithm, Mathematics

Publication Info

Year
2016
Type
article
Citations
1069 (OpenAlex)
Access
Closed

Cite This

Leonid Pishchulin, Eldar Insafutdinov, Siyu Tang et al. (2016). DeepCut: Joint Subset Partition and Labeling for Multi Person Pose Estimation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/cvpr.2016.533

Identifiers

DOI
10.1109/cvpr.2016.533