Title: Predicting Motivations of Actions by Leveraging Text
Authors: Vondrick, Carl; Oktay, Deniz; Pirsiavash, Hamed; Torralba, Antonio
Date issued: 2016-12-12
Date available: 2019-07-01
Citation: Carl Vondrick et al., "Predicting Motivations of Actions by Leveraging Text," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). DOI: 10.1109/CVPR.2016.327
DOI: https://doi.org/10.1109/CVPR.2016.327
Handle: http://hdl.handle.net/11603/14329
Extent: 9 pages
Language: en-US
Rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author. © 2016 IEEE
Type: Text
Keywords: computer vision; data mining; image annotation; image recognition; natural language processing; action motivation prediction; knowledge mining; human activity understanding

Abstract: Understanding human actions is a key problem in computer vision. However, recognizing actions is only the first step toward understanding what a person is doing. In this paper, we introduce the problem of predicting why a person has performed an action in images. This problem has many applications in human activity understanding, such as anticipating or explaining an action. To study this problem, we introduce a new dataset of people performing actions annotated with likely motivations. However, the information in an image alone may not be sufficient to solve this task automatically. Since humans can rely on a lifetime of experience to infer motivation, we propose to give computer vision systems access to some of these experiences by using recently developed natural language models to mine knowledge stored in massive amounts of text. While we are still far from fully understanding motivation, our results suggest that transferring knowledge from language into vision can help machines understand why the people in images might be performing an action.
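
Note: the record above gives no method details. As a loose illustration of the abstract's central idea (ranking candidate motivations for a recognized action using knowledge mined from text), the following is a minimal, hypothetical Python sketch. The toy corpus, the motivation_prior function, and the co-occurrence scoring are illustrative assumptions, not the authors' model; the paper itself relies on learned language models over much larger text collections.

    from collections import Counter

    # Toy text corpus standing in for "massive amounts of text".
    # The paper mines knowledge with natural language models; this
    # sketch substitutes simple word co-occurrence counts.
    CORPUS = [
        "she is running because she wants to catch the bus",
        "he is running to stay healthy and exercise",
        "they are running because they are late for the meeting",
        "he is eating because he is hungry",
    ]

    def motivation_prior(action, candidate_motivations, corpus):
        """Score each candidate motivation by how often its words
        co-occur with the action word in the corpus (a stand-in
        for a learned language-model prior)."""
        scores = Counter()
        for sentence in corpus:
            words = set(sentence.split())
            if action in words:
                for motivation in candidate_motivations:
                    overlap = len(set(motivation.split()) & words)
                    scores[motivation] += overlap
        total = sum(scores.values()) or 1  # avoid division by zero
        return {m: scores[m] / total for m in candidate_motivations}

    # Hypothetical usage: a vision model has predicted the action
    # "running"; the text-derived prior then ranks possible motivations.
    candidates = ["catch the bus", "stay healthy", "late for meeting"]
    print(motivation_prior("running", candidates, CORPUS))

In the paper's framing, a score like this would be combined with image evidence, so that the text prior disambiguates motivations the pixels alone cannot.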