Cross-Modal Scene Networks

Date

2017-09-18

Citation of Original Publication

Yusuf Aytar et al., "Cross-Modal Scene Networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 10, Oct. 2018. DOI: 10.1109/TPAMI.2017.2753232

Rights

This item is likely protected under Title 17 of the U.S. Copyright Law. Unless covered by a Creative Commons license, uses protected by Copyright Law require permission from the copyright holder or the author.
© 2017 IEEE

Abstract

People can recognize scenes across many different modalities beyond natural images. In this paper, we investigate how to learn cross-modal scene representations that transfer across modalities. To study this problem, we introduce a new cross-modal scene dataset. While convolutional neural networks can categorize scenes well, they also learn intermediate representations that are not aligned across modalities, which is undesirable for cross-modal transfer applications. We present methods to regularize cross-modal convolutional neural networks so that they have a shared representation that is agnostic of the modality. Our experiments suggest that this scene representation helps transfer across modalities for retrieval. Moreover, our visualizations suggest that units emerge in the shared representation that tend to activate on consistent concepts independent of the modality.
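
To make the idea in the abstract concrete, below is a minimal PyTorch sketch, not the authors' implementation: each modality gets its own convolutional encoder, both feed the same shared layers, and a simple statistical regularizer penalizes differences between the mean activations of the two modalities in the shared layer. The CrossModalNet class, the "image"/"sketch" modality names, the mean-matching penalty, and the 0.1 weight are all illustrative assumptions, not details from the paper.

import torch
import torch.nn as nn

class CrossModalNet(nn.Module):
    """Modality-specific encoders feeding a shared, modality-agnostic head."""
    def __init__(self, num_scenes=10):
        super().__init__()
        # One small convolutional encoder per modality (e.g., natural
        # images vs. sketches); these weights are NOT shared.
        def make_encoder():
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.encoders = nn.ModuleDict({"image": make_encoder(),
                                       "sketch": make_encoder()})
        # Shared layers: both modalities pass through the same weights,
        # so this representation is encouraged to be modality-agnostic.
        self.shared = nn.Sequential(nn.Linear(64, 128), nn.ReLU())
        self.classifier = nn.Linear(128, num_scenes)

    def forward(self, x, modality):
        h = self.shared(self.encoders[modality](x))
        return self.classifier(h), h

def alignment_loss(h_a, h_b):
    # Illustrative statistical regularizer: penalize the gap between the
    # mean shared-layer activations of the two modalities.
    return (h_a.mean(dim=0) - h_b.mean(dim=0)).pow(2).sum()

# Usage: scene classification per modality plus a cross-modal penalty.
model = CrossModalNet()
imgs, sketches = torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 10, (8,))
logits_i, h_i = model(imgs, "image")
logits_s, h_s = model(sketches, "sketch")
ce = nn.CrossEntropyLoss()
loss = ce(logits_i, labels) + ce(logits_s, labels) \
       + 0.1 * alignment_loss(h_i, h_s)
loss.backward()

Sharing the upper layers while regularizing their activation statistics is one common way to encourage a representation that activates on the same concepts regardless of modality; the paper's actual regularization methods may differ from this sketch.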