Authors: Pirsiavash, Hamed; Ramanan, Deva
Date deposited: 2019-06-28
Date issued: 2014-06-28
Citation: Hamed Pirsiavash, Deva Ramanan, "Parsing videos of actions with segmental grammars," 2014 IEEE Conference on Computer Vision and Pattern Recognition, DOI: 10.1109/CVPR.2014.85
DOI: https://doi.org/10.1109/CVPR.2014.85
Handle: http://hdl.handle.net/11603/14318
Abstract: Real-world videos of human activities exhibit temporal structure at various scales: long videos are typically composed of multiple action instances, each of which is itself composed of sub-actions with variable durations and orderings. Temporal grammars can presumably model such hierarchical structure, but are computationally difficult to apply to long video streams. We describe simple grammars that capture hierarchical temporal structure while admitting inference with a finite-state machine. This makes parsing linear time, constant storage, and naturally online. We train grammar parameters using a latent structural SVM, where latent sub-actions are learned automatically. We illustrate the effectiveness of our approach over common baselines on a new half-million-frame dataset of continuous YouTube videos.
Extent: 8 pages
Language: en-US
Rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author. © 2014 IEEE
Subjects: Grammar; Videos; Hidden Markov models; Data models; Presses; Markov processes; finite state machines; support vector machines; image segmentation; latent subactions; latent structural SVM
Title: Parsing videos of actions with segmental grammars
Type: Text
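Note: the abstract's claim of linear-time, constant-storage, online inference with a finite-state machine can be illustrated with a minimal sketch. This is not the authors' code; the action states, transition scores, and per-frame scores below are hypothetical, and greedy per-frame output stands in for full segmental decoding.

```python
import math

STATES = ["background", "reach", "grasp"]          # hypothetical action states
TRANS = {                                          # hypothetical log transition scores
    "background": {"background": -0.1, "reach": -2.0, "grasp": -math.inf},
    "reach":      {"reach": -0.1, "grasp": -1.0, "background": -3.0},
    "grasp":      {"grasp": -0.1, "background": -1.0, "reach": -math.inf},
}

def online_parse(frame_scores):
    """Viterbi-style recursion over a finite-state machine, one frame at a
    time: only the previous column of scores is kept, so time is linear in
    video length and storage is constant."""
    prev = {s: frame_scores[0][s] for s in STATES}
    labels = [max(prev, key=prev.get)]
    for scores in frame_scores[1:]:
        cur = {s: scores[s] + max(prev[p] + TRANS[p][s] for p in STATES)
               for s in STATES}
        prev = cur
        labels.append(max(cur, key=cur.get))       # greedy online label output
    return labels
```

Feeding per-frame log scores (e.g. SVM responses) yields an online sequence of action-state labels as each frame arrives.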