Generating Videos with Scene Dynamics
Author/Creator
Carl Vondrick; Hamed Pirsiavash; Antonio Torralba
Date
2016
Type of Work
Conference paper
Citation of Original Publication
Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. "Generating Videos with Scene Dynamics." Advances in Neural Information Processing Systems 29 (NIPS 2016). https://papers.nips.cc/paper/6194-generating-videos-with-scene-dynamics.pdf
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless the work is covered by a Creative Commons license, contact the copyright holder or the author for uses protected by Copyright Law.
Abstract
We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.
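The core idea in the abstract is a generator that untangles foreground from background: a spatio-temporal (3D) convolutional stream produces a moving foreground and a blending mask, a 2D stream produces a static background image, and the output video is the mask-weighted composite of the two. Below is a minimal PyTorch sketch of that two-stream design. It is illustrative only: the class name, layer widths, and output resolution (32 frames at 64x64) are assumptions chosen to match the abstract's description, not the authors' original implementation.

import torch
import torch.nn as nn

class TwoStreamVideoGenerator(nn.Module):
    # Hypothetical sketch of a two-stream video GAN generator: a 3D
    # foreground stream with a learned mask, composited over a static
    # background stream. Layer sizes are illustrative assumptions.
    def __init__(self, z_dim=100):
        super().__init__()
        # Foreground stream: fractionally-strided 3D convolutions upsample
        # the noise vector into a (channels, time, height, width) volume.
        self.fg = nn.Sequential(
            nn.ConvTranspose3d(z_dim, 512, kernel_size=(2, 4, 4)),
            nn.BatchNorm3d(512), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(512, 256, 4, stride=2, padding=1),
            nn.BatchNorm3d(256), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(256, 128, 4, stride=2, padding=1),
            nn.BatchNorm3d(128), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1),
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
        )
        self.fg_rgb = nn.Sequential(   # moving foreground pixels
            nn.ConvTranspose3d(64, 3, 4, stride=2, padding=1), nn.Tanh())
        self.fg_mask = nn.Sequential(  # per-pixel blend weights in [0, 1]
            nn.ConvTranspose3d(64, 1, 4, stride=2, padding=1), nn.Sigmoid())
        # Background stream: 2D deconvolutions produce one static image.
        self.bg = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 512, kernel_size=4),
            nn.BatchNorm2d(512), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1),
            nn.BatchNorm2d(256), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh())

    def forward(self, z):
        h = self.fg(z.view(-1, z.size(1), 1, 1, 1))
        fg = self.fg_rgb(h)                 # (B, 3, 32, 64, 64) foreground video
        mask = self.fg_mask(h)              # (B, 1, 32, 64, 64) blend mask
        bg = self.bg(z.view(-1, z.size(1), 1, 1))
        bg = bg.unsqueeze(2).expand_as(fg)  # replicate static image over time
        return mask * fg + (1 - mask) * bg  # video = m * f + (1 - m) * b

Feeding a batch of noise vectors, e.g. torch.randn(8, 100), yields an (8, 3, 32, 64, 64) tensor: eight 32-frame clips at 64x64 resolution. A spatio-temporal convolutional discriminator (not shown) would complete the adversarial training setup the abstract describes.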