Abstract

This paper addresses the problem of learning object models from egocentric video of household activities using extremely weak supervision. For each activity sequence, we know only the names of the objects present in it, and have no other knowledge of their appearance or location. The key to our approach is a robust, unsupervised bottom-up segmentation method that exploits the structure of the egocentric domain to partition each frame into hand, object, and background categories. Using Multiple Instance Learning to match object instances across sequences, we discover and localize object occurrences. Object representations are then refined through transduction, and object-level classifiers are trained. We demonstrate encouraging results in detecting novel object instances using models produced by weakly supervised learning.
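As a rough illustration of the weak-supervision setup described above, the sketch below casts each activity sequence as a "bag" of candidate object segments labeled only with the names of the objects present, then picks one segment per positive bag by cross-sequence similarity, in the spirit of Multiple Instance Learning. Everything here is assumed for illustration (the toy random features, the make_bag and discover_instances helpers, and the cosine-similarity heuristic); it is not the paper's actual segmentation or MIL formulation.

# Minimal sketch of MIL-style instance discovery under sequence-level
# (bag-level) object-name labels; NOT the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

def make_bag(n_segments, dim=16):
    # A "bag" = candidate object segments from one activity sequence,
    # stood in for here by random toy feature vectors.
    return rng.normal(size=(n_segments, dim))

# The only supervision: names of objects present somewhere in each sequence.
bags = {
    "make_tea":    (make_bag(8),  {"cup", "kettle"}),
    "make_coffee": (make_bag(10), {"cup", "coffee_jar"}),
    "wash_dishes": (make_bag(6),  {"cup", "sponge"}),
}

def discover_instances(object_name, bags):
    # In each sequence whose label set contains object_name, pick the
    # segment that best agrees with segments from the other positive
    # sequences (a simple cosine-similarity heuristic, for illustration).
    positives = [feats for feats, labels in bags.values() if object_name in labels]
    picks = []
    for i, feats in enumerate(positives):
        others = np.vstack([f for j, f in enumerate(positives) if j != i])
        a = feats / np.linalg.norm(feats, axis=1, keepdims=True)
        b = others / np.linalg.norm(others, axis=1, keepdims=True)
        sims = (a @ b.T).max(axis=1)       # best match elsewhere, per candidate
        picks.append(int(sims.argmax()))   # index of the chosen segment
    return picks

print(discover_instances("cup", bags))  # one picked segment per positive bag

In the paper's pipeline these discovered instances would then seed transductive refinement and the training of object-level classifiers; the sketch stops at the discovery step.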

Keywords

Artificial intelligence, Object (grammar), Computer science, Exploit, Segmentation, Domain (mathematical analysis), Partition (number theory), Cognitive neuroscience of visual object recognition, Computer vision, Learning object, Frame (networking), Pattern recognition (psychology), Machine learning, Mathematics

Publication Info

Year: 2011
Type: Article
Citations: 534 (OpenAlex)
Access: Closed

Cite This

Alireza Fathi, Xiaofeng Ren, James M. Rehg (2011). Learning to recognize objects in egocentric activities. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2011). https://doi.org/10.1109/cvpr.2011.5995444

Identifiers

DOI
10.1109/cvpr.2011.5995444