Abstract

In this paper, we present a systematic framework for recognizing realistic actions from videos “in the wild.” Such unconstrained videos are abundant in personal collections as well as on the web. Recognizing actions in such videos has not been addressed extensively, primarily due to the tremendous variations caused by camera motion, background clutter, and changes in object appearance and scale. The main challenge is extracting reliable and informative features from unconstrained videos. We extract both motion and static features from the videos. Since the raw features of both types are dense yet noisy, we propose strategies to prune them: motion statistics are used to acquire stable motion features and to clean the static features, and PageRank is then applied to mine the most informative static features. To construct compact yet discriminative visual vocabularies, a divisive information-theoretic algorithm is employed to group semantically related features. Finally, AdaBoost integrates all the heterogeneous yet complementary features for recognition. We have tested the framework on the KTH dataset and on our own dataset of 11 action categories collected from YouTube and personal videos, obtaining impressive results for both action recognition and action localization.
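The abstract names PageRank as the tool for mining informative static features, but not how the feature graph is built. Below is a minimal, hypothetical Python sketch of the general idea: score descriptors by damped power iteration over a k-nearest-neighbor similarity graph and keep the top-ranked ones. The k-NN construction, damping factor, iteration count, and synthetic descriptors are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def pagerank_feature_scores(descriptors, k=5, damping=0.85, n_iter=50):
    """Score features by PageRank over a k-NN similarity graph.

    descriptors: (n, d) array of static feature descriptors (e.g., SIFT).
    Returns one score per feature; higher scores mark features that are
    well connected to many similar features, a rough proxy for "informative".
    Graph construction and parameters here are illustrative assumptions.
    """
    n = len(descriptors)
    # Pairwise Euclidean distances between descriptors.
    diffs = descriptors[:, None, :] - descriptors[None, :, :]
    dist = np.sqrt((diffs ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)  # no self-edges

    # Directed edge from each feature to its k nearest neighbors.
    adj = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist[i])[:k]
        adj[i, nbrs] = 1.0

    # Column-stochastic transition matrix, then damped power iteration.
    out_deg = adj.sum(axis=1, keepdims=True)  # = k for every node
    trans = (adj / out_deg).T
    scores = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        scores = (1 - damping) / n + damping * trans @ scores
    return scores

# Example: keep the top 20% highest-ranked features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 128))  # stand-in for real SIFT descriptors
scores = pagerank_feature_scores(feats)
keep = np.argsort(scores)[::-1][: len(feats) // 5]
```

The intuition is that descriptors recurring consistently across a video form densely linked neighborhoods in the similarity graph, so random-walk mass concentrates on them, while isolated, noisy detections receive low scores and can be pruned.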

Keywords

Computer science, Discriminative model, Artificial intelligence, Motion, Pattern recognition, AdaBoost, Clutter, Feature extraction, Action recognition, Computer vision, Machine learning, Histogram, Support vector machine, Image

Publication Info

Year: 2009
Type: article
Citations: 1072 (OpenAlex)
Access: Closed

Cite This

Jingen Liu, Jiebo Luo, Mubarak Shah (2009). Recognizing realistic actions from videos “in the wild”. 2009 IEEE Conference on Computer Vision and Pattern Recognition. https://doi.org/10.1109/cvpr.2009.5206744

Identifiers

DOI
10.1109/cvpr.2009.5206744