Getting it Right: Improving Spatial Consistency in Text-to-Image Models

dc.contributor.author: Chatterjee, Agneet
dc.contributor.author: Stan, Gabriela Ben Melech
dc.contributor.author: Aflalo, Estelle
dc.contributor.author: Paul, Sayak
dc.contributor.author: Ghosh, Dhruba
dc.contributor.author: Gokhale, Tejas
dc.contributor.author: Schmidt, Ludwig
dc.contributor.author: Hajishirzi, Hannaneh
dc.contributor.author: Lal, Vasudev
dc.contributor.author: Baral, Chitta
dc.contributor.author: Yang, Yezhou
dc.date.accessioned: 2024-11-14T15:18:36Z
dc.date.available: 2024-11-14T15:18:36Z
dc.date.issued: 2024-09-30
dc.description: Computer Vision – ECCV 2024, 18th European Conference, Milan, Italy, September 29 – October 4, 2024
dc.description.abstract: One of the key shortcomings of current text-to-image (T2I) models is their inability to consistently generate images that faithfully follow the spatial relationships specified in the text prompt. In this paper, we offer a comprehensive investigation of this limitation, while also developing datasets and methods that support algorithmic solutions to improve spatial reasoning in T2I models. We find that spatial relationships are under-represented in the image descriptions found in current vision-language datasets. To alleviate this data bottleneck, we create SPRIGHT, the first spatially focused, large-scale dataset, by re-captioning 6 million images from 4 widely used vision datasets. Through a 3-fold evaluation and analysis pipeline, we show that SPRIGHT improves the proportion of spatial relationships over existing datasets. We demonstrate the efficacy of SPRIGHT by showing that training on only ~0.25% of it yields a 22% improvement in generating spatially accurate images, while also improving FID and CMMD scores. We also find that training on images containing a larger number of objects leads to substantial improvements in spatial consistency, including state-of-the-art results on T2I-CompBench with a spatial score of 0.2133, achieved by fine-tuning on < 500 images. Through a set of controlled experiments and ablations, we document additional findings that could support future work on the factors that affect spatial consistency in text-to-image models. Project page: https://spright-t2i.github.io/.
dc.description.sponsorship: We thank Lucain Pouget for helping us in uploading the dataset to the Hugging Face Hub and the Hugging Face team for providing computing resources to host our demo. The authors acknowledge resources and support from the Research Computing facilities at Arizona State University. AC, CB, YY were supported by NSF Robust Intelligence program grants #1750082 and #2132724. TG was supported by Microsoft’s Accelerating Foundation Model Research (AFMR) program and UMBC’s Strategic Award for Research Transitions (START). The views and opinions of the authors expressed herein do not necessarily state or reflect those of the funding agencies and employers.
dc.description.uri: https://link.springer.com/chapter/10.1007/978-3-031-72670-5_12
dc.format.extent: 28 pages
dc.genre: conference papers and proceedings
dc.genre: postprints
dc.identifier: doi:10.13016/m2ollf-6nxp
dc.identifier.citation: Chatterjee, Agneet, Gabriela Ben Melech Stan, Estelle Aflalo, Sayak Paul, Dhruba Ghosh, Tejas Gokhale, Ludwig Schmidt, et al. “Getting It Right: Improving Spatial Consistency in Text-to-Image Models.” Edited by Aleš Leonardis, Elisa Ricci, Stefan Roth, Olga Russakovsky, Torsten Sattler, and Gül Varol. Computer Vision – ECCV 2024, 2025, 204–22. https://doi.org/10.1007/978-3-031-72670-5_12.
dc.identifier.uri: https://doi.org/10.1007/978-3-031-72670-5_12
dc.identifier.uri: http://hdl.handle.net/11603/36941
dc.language.iso: en_US
dc.publisher: Springer Nature
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department
dc.rights: This version of the article has been accepted for publication, after peer review (when applicable) and is subject to Springer Nature’s AM terms of use, but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: https://doi.org/10.1007/978-3-031-72670-5_12
dc.title: Getting it Right: Improving Spatial Consistency in Text-to-Image Models
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-5593-2804

Files

Original bundle

Name: 2404.01197v2.pdf
Size: 4.94 MB
Format: Adobe Portable Document Format