On the Robustness of Language Guidance for Low-Level Vision Tasks: Findings from Depth Estimation

dc.contributor.author: Chatterjee, Agneet
dc.contributor.author: Gokhale, Tejas
dc.contributor.author: Baral, Chitta
dc.contributor.author: Yang, Yezhou
dc.date.accessioned: 2024-05-13T19:11:18Z
dc.date.available: 2024-05-13T19:11:18Z
dc.date.issued: 2024-09-16
dc.description: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 16-22 June 2024, Seattle, WA, USA
dc.description.abstract: Recent advances in monocular depth estimation have been made by incorporating natural language as additional guidance. Although these methods yield impressive results, the impact of the language prior, particularly in terms of generalization and robustness, remains unexplored. In this paper, we address this gap by quantifying the impact of this prior and introduce methods to benchmark its effectiveness across various settings. We generate "low-level" sentences that convey object-centric, three-dimensional spatial relationships, incorporate them as additional language priors, and evaluate their downstream impact on depth estimation. Our key finding is that current language-guided depth estimators perform optimally only with scene-level descriptions and, counterintuitively, fare worse with low-level descriptions. Despite leveraging additional data, these methods are not robust to directed adversarial attacks, and their performance declines as distribution shift increases. Finally, to provide a foundation for future research, we identify points of failure and offer insights to better understand these shortcomings. With an increasing number of methods using language for depth estimation, our findings highlight the opportunities and pitfalls that require careful consideration for effective deployment in real-world settings.
dc.description.sponsorship: The authors acknowledge Research Computing at Arizona State University for providing HPC resources and support for this work. This work was supported by NSF RI grants #1750082 and #2132724. The views and opinions expressed herein are those of the authors and do not necessarily state or reflect those of the funding agencies or employers.
dc.description.uri: https://ieeexplore.ieee.org/document/10657342
dc.format.extent: 14 pages
dc.genre: conference papers and proceedings
dc.genre: postprints
dc.identifier: doi:10.1109/CVPR52733.2024.00270
dc.identifier.citation: Chatterjee, Agneet, Tejas Gokhale, Chitta Baral, and Yezhou Yang. “On the Robustness of Language Guidance for Low-Level Vision Tasks: Findings from Depth Estimation.” 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2024, 2794–2803. https://doi.org/10.1109/CVPR52733.2024.00270.
dc.identifier.uri: https://doi.org/10.1109/CVPR52733.2024.00270
dc.identifier.uri: http://hdl.handle.net/11603/33955
dc.language.iso: en_US
dc.publisher: IEEE
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department
dc.rights: © 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: Computer Science - Computer Vision and Pattern Recognition
dc.title: On the Robustness of Language Guidance for Low-Level Vision Tasks: Findings from Depth Estimation
dc.type: Text

Files

Original bundle

Name: 2404.08540v1.pdf
Size: 5.61 MB
Format: Adobe Portable Document Format