Getting it Right: Improving Spatial Consistency in Text-to-Image Models
dc.contributor.author | Chatterjee, Agneet
dc.contributor.author | Stan, Gabriela Ben Melech
dc.contributor.author | Aflalo, Estelle
dc.contributor.author | Paul, Sayak
dc.contributor.author | Ghosh, Dhruba
dc.contributor.author | Gokhale, Tejas
dc.contributor.author | Schmidt, Ludwig
dc.contributor.author | Hajishirzi, Hannaneh
dc.contributor.author | Lal, Vasudev
dc.contributor.author | Baral, Chitta
dc.contributor.author | Yang, Yezhou
dc.date.accessioned | 2024-11-14T15:18:36Z
dc.date.available | 2024-11-14T15:18:36Z
dc.date.issued | 2024-09-30
dc.description | Computer Vision – ECCV 2024, 18th European Conference, Milan, Italy, September 29 – October 4, 2024
dc.description.abstract | One of the key shortcomings of current text-to-image (T2I) models is their inability to consistently generate images that faithfully follow the spatial relationships specified in the text prompt. In this paper, we offer a comprehensive investigation of this limitation, while also developing datasets and methods that support algorithmic solutions to improve spatial reasoning in T2I models. We find that spatial relationships are under-represented in the image descriptions found in current vision-language datasets. To alleviate this data bottleneck, we create SPRIGHT, the first spatially focused, large-scale dataset, by re-captioning 6 million images from 4 widely used vision datasets. Through a 3-fold evaluation and analysis pipeline, we show that SPRIGHT improves the proportion of spatial relationships over existing datasets. We demonstrate the efficacy of SPRIGHT by showing that using only ~0.25% of it yields a 22% improvement in generating spatially accurate images, while also improving FID and CMMD scores. We also find that training on images containing a larger number of objects leads to substantial improvements in spatial consistency, including state-of-the-art results on T2I-CompBench with a spatial score of 0.2133, achieved by fine-tuning on fewer than 500 images. Through a set of controlled experiments and ablations, we document additional findings that could support future work on understanding the factors that affect spatial consistency in text-to-image models. Project page: https://spright-t2i.github.io/.
dc.description.sponsorship | We thank Lucain Pouget for helping us in uploading the dataset to the Hugging Face Hub and the Hugging Face team for providing computing resources to host our demo. The authors acknowledge resources and support from the Research Computing facilities at Arizona State University. AC, CB, YY were supported by NSF Robust Intelligence program grants #1750082 and #2132724. TG was supported by Microsoft’s Accelerating Foundation Model Research (AFMR) program and UMBC’s Strategic Award for Research Transitions (START). The views and opinions of the authors expressed herein do not necessarily state or reflect those of the funding agencies and employers.
dc.description.uri | https://link.springer.com/chapter/10.1007/978-3-031-72670-5_12
dc.format.extent | 28 pages
dc.genre | conference papers and proceedings
dc.genre | postprints
dc.identifier | doi:10.13016/m2ollf-6nxp
dc.identifier.citation | Chatterjee, Agneet, Gabriela Ben Melech Stan, Estelle Aflalo, Sayak Paul, Dhruba Ghosh, Tejas Gokhale, Ludwig Schmidt, et al. “Getting It Right: Improving Spatial Consistency in Text-to-Image Models.” Edited by Aleš Leonardis, Elisa Ricci, Stefan Roth, Olga Russakovsky, Torsten Sattler, and Gül Varol. Computer Vision – ECCV 2024, 2025, 204–22. https://doi.org/10.1007/978-3-031-72670-5_12.
dc.identifier.uri | https://doi.org/10.1007/978-3-031-72670-5_12
dc.identifier.uri | http://hdl.handle.net/11603/36941
dc.language.iso | en_US
dc.publisher | Springer Nature
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof | UMBC Faculty Collection
dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department
dc.rights | This version of the article has been accepted for publication, after peer review (when applicable) and is subject to Springer Nature’s AM terms of use, but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: https://doi.org/10.1007/978-3-031-72670-5_12
dc.title | Getting it Right: Improving Spatial Consistency in Text-to-Image Models
dc.type | Text
dcterms.creator | https://orcid.org/0000-0002-5593-2804