End-to-end Knowledge Retrieval with Multi-modal Queries

dc.contributor.author: Luo, Man
dc.contributor.author: Fang, Zhiyuan
dc.contributor.author: Gokhale, Tejas
dc.contributor.author: Yang, Yezhou
dc.contributor.author: Baral, Chitta
dc.date.accessioned: 2024-02-27T19:20:49Z
dc.date.available: 2024-02-27T19:20:49Z
dc.date.issued: 2023-07
dc.description: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, Toronto, Canada, July 9-14, 2023
dc.description.abstract: We investigate knowledge retrieval with multi-modal queries, i.e., queries containing information split across image and text inputs, a challenging task that differs from previous work on cross-modal retrieval. We curate a new dataset called ReMuQ for benchmarking progress on this task. ReMuQ requires a system to retrieve knowledge from a large corpus by integrating contents from both text and image queries. We introduce a retriever model "ReViz" that can directly process input text and images to retrieve relevant knowledge in an end-to-end fashion, without depending on intermediate modules such as object detectors or caption generators. We introduce a new pretraining task that is effective for learning knowledge retrieval with multi-modal queries and also improves performance on downstream tasks. We demonstrate superior retrieval performance on two datasets (ReMuQ and OK-VQA) under zero-shot settings, as well as further improvements when finetuned on these datasets. (An illustrative retrieval sketch follows this metadata record.)
dc.description.sponsorship: This work was supported by grants from the National Science Foundation (#1816039 and #2132724) and DARPA (W911NF2020006). The views and opinions of the authors expressed herein do not necessarily state or reflect those of the funding agencies and employers.
dc.description.uri: https://aclanthology.org/2023.acl-long.478/
dc.format.extent: 17 pages
dc.genre: conference papers and proceedings
dc.identifier: doi:10.13016/m2qwx6-xauo
dc.identifier.citation: Man Luo, Zhiyuan Fang, Tejas Gokhale, Yezhou Yang, and Chitta Baral. 2023. End-to-end Knowledge Retrieval with Multi-modal Queries. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8573–8589, Toronto, Canada. Association for Computational Linguistics.
dc.identifier.uri: https://doi.org/10.18653/v1/2023.acl-long.478
dc.identifier.uri: http://hdl.handle.net/11603/31718
dc.publisher: ACL
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.rights: Creative Commons Attribution 4.0 International (CC BY 4.0)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.title: End-to-end Knowledge Retrieval with Multi-modal Queries
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-5593-2804
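
The abstract describes a retriever that fuses an image and a text query into a single representation and scores a knowledge corpus directly, with no object detector or caption generator in the loop. The following is a minimal sketch of that end-to-end setup, assuming a hypothetical late-fusion encoder and dense inner-product retrieval; it illustrates the general technique, not the authors' actual ReViz architecture, and all module names and dimensions here are invented for the example.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiModalQueryEncoder(nn.Module):
    # Hypothetical late-fusion encoder: projects precomputed image and text
    # features into a shared space and fuses them into one query embedding.
    def __init__(self, img_dim=2048, txt_dim=768, out_dim=768):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, out_dim)
        self.txt_proj = nn.Linear(txt_dim, out_dim)
        self.fuse = nn.Linear(2 * out_dim, out_dim)

    def forward(self, img_feat, txt_feat):
        fused = torch.cat([self.img_proj(img_feat), self.txt_proj(txt_feat)], dim=-1)
        return F.normalize(self.fuse(fused), dim=-1)  # unit-norm query vector

def retrieve(query_vec, corpus_embeddings, k=5):
    # Dense retrieval: rank knowledge passages by inner product with the query.
    scores = corpus_embeddings @ query_vec
    return scores.topk(k).indices

# Toy usage: random tensors stand in for real image/text features and for a
# precomputed index of 10,000 passage embeddings.
encoder = MultiModalQueryEncoder()
query = encoder(torch.randn(2048), torch.randn(768))
corpus = F.normalize(torch.randn(10_000, 768), dim=-1)
print(retrieve(query, corpus))

In practice such an encoder would be trained contrastively on (multi-modal query, relevant passage) pairs, as in standard dense retrieval; the paper's contribution includes a pretraining task suited specifically to multi-modal queries.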

Files

Original bundle
Name: 2023.acl-long.478.pdf
Size: 4.03 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon to submission