Learning Aligned Cross-Modal Representations from Weakly Aligned Data
dc.contributor.author | Castrejón, Lluís | |
dc.contributor.author | Aytar, Yusuf | |
dc.contributor.author | Vondrick, Carl | |
dc.contributor.author | Pirsiavash, Hamed | |
dc.contributor.author | Torralba, Antonio | |
dc.date.accessioned | 2019-07-01T14:14:36Z | |
dc.date.available | 2019-07-01T14:14:36Z | |
dc.date.issued | 2016-06-30 | |
dc.description | 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). | en_US |
dc.description.abstract | People can recognize scenes across many different modalities beyond natural images. In this paper, we investigate how to learn cross-modal scene representations that transfer across modalities. To study this problem, we introduce a new cross-modal scene dataset. While convolutional neural networks can categorize cross-modal scenes well, they also learn an intermediate representation not aligned across modalities, which is undesirable for cross-modal transfer applications. We present methods to regularize cross-modal convolutional neural networks so that they have a shared representation that is agnostic of the modality. Our experiments suggest that our scene representation can help transfer representations across modalities for retrieval. Moreover, our visualizations suggest that units emerge in the shared representation that tend to activate on consistent concepts independently of the modality. | en_US |
dc.description.sponsorship | This work was supported by NSF grant IIS-1524817, by a Google faculty research award to A.T., and by a Google Ph.D. fellowship to C.V. | en_US |
dc.description.uri | https://ieeexplore.ieee.org/document/7780690 | en_US |
dc.format.extent | 10 pages | en_US |
dc.genre | conference papers and proceedings preprints | en_US |
dc.identifier | doi:10.13016/m2dnjv-coqd | |
dc.identifier.citation | Lluís Castrejón, et al., Learning Aligned Cross-Modal Representations from Weakly Aligned Data, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), DOI: 10.1109/CVPR.2016.321 | en_US |
dc.identifier.uri | https://doi.org/10.1109/CVPR.2016.321 | |
dc.identifier.uri | http://hdl.handle.net/11603/14321 | |
dc.language.iso | en_US | en_US |
dc.publisher | IEEE | en_US |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department Collection | |
dc.relation.ispartof | UMBC Faculty Collection | |
dc.rights | This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author. | |
dc.rights | © 2016 IEEE | |
dc.subject | image representation | en_US |
dc.subject | neural nets | en_US |
dc.subject | convolutional neural networks | en_US |
dc.subject | weakly aligned data | en_US |
dc.subject | aligned cross-modal scene representations | en_US |
dc.subject | image recognition | en_US |
dc.subject | data models | en_US |
dc.subject | automobiles | en_US |
dc.title | Learning Aligned Cross-Modal Representations from Weakly Aligned Data | en_US |
dc.type | Text | en_US |