Parsing videos of actions with segmental grammars
Date
2014-06-28
Citation of Original Publication
Hamed Pirsiavash, Deva Ramanan, "Parsing videos of actions with segmental grammars," 2014 IEEE Conference on Computer Vision and Pattern Recognition, DOI: 10.1109/CVPR.2014.85
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless the item is covered by a Creative Commons license, contact the copyright holder or the author for uses protected by Copyright Law.
© 2014 IEEE
Abstract
Real-world videos of human activities exhibit temporal structure at various scales: long videos are typically composed of multiple action instances, where each instance is itself composed of sub-actions with variable durations and orderings. Temporal grammars can presumably model such hierarchical structure, but are computationally difficult to apply to long video streams. We describe simple grammars that capture hierarchical temporal structure while admitting inference with a finite-state machine. This makes parsing linear time, constant storage, and naturally online. We train grammar parameters using a latent structural SVM, where latent sub-actions are learned automatically. We illustrate the effectiveness of our approach over common baselines on a new half-million-frame dataset of continuous YouTube videos.
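The key computational claim of the abstract, that finite-state inference makes parsing linear in video length, can be illustrated with a standard online Viterbi recursion. The sketch below is not the authors' implementation; the state set, scores, and transition matrix are all hypothetical placeholders. Each frame update touches only the previous score vector (the backpointer table shown here is kept solely to recover the full labeling at the end; a truly constant-storage online variant would emit labels from a sliding window instead).

```python
# Illustrative sketch (not the paper's code): decoding a sequence of
# sub-action states with a finite-state machine via online Viterbi.
# Assumptions: per-frame log-scores for K hypothetical sub-action
# states and a K x K log-transition matrix between them.
import numpy as np

def online_viterbi(frame_scores, transition):
    """frame_scores: (T, K) per-frame log-scores for K sub-action states.
    transition:   (K, K) log-scores for moving between states.
    Returns the max-scoring state label per frame."""
    T, K = frame_scores.shape
    score = frame_scores[0].copy()           # best score per state so far
    backptr = np.zeros((T, K), dtype=int)    # kept only to decode at the end
    for t in range(1, T):
        # score of arriving in each current state from each previous state
        cand = score[:, None] + transition   # shape (prev K, cur K)
        backptr[t] = np.argmax(cand, axis=0)
        score = cand[backptr[t], np.arange(K)] + frame_scores[t]
    # backtrack to recover the frame labeling
    states = np.zeros(T, dtype=int)
    states[-1] = int(np.argmax(score))
    for t in range(T - 1, 0, -1):
        states[t - 1] = backptr[t, states[t]]
    return states.tolist()
```

The per-frame work is O(K^2) regardless of video length T, which is what makes this style of inference attractive for long, streaming video compared with general grammar parsers whose cost grows super-linearly in T.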