Increasing Visual Literacy With Collaborative Foraging, Annotation, Curation, and Critique
Author/Creator
Williams, Rebecca Marie; Syed, Afrin Unnisa; Kurumaddali, Krishna Vamsi
Date
2024-12-05
Type of Work
Conference paper
Department
Program
Citation of Original Publication
Williams, Rebecca Marie, Afrin Unnisa Syed, and Krishna Vamsi Kurumaddali. "Increasing Visual Literacy With Collaborative Foraging, Annotation, Curation, and Critique". In Proceedings of the 2024 on ACM Virtual Global Computing Education Conference V. 1, 249–55. SIGCSE Virtual 2024. New York, NY, USA: Association for Computing Machinery, 2024. https://doi.org/10.1145/3649165.3690108.
Rights
Attribution 4.0 International (CC BY 4.0)
Subjects
Abstract
Students today face information overload, contamination, and bloat from dubious sources: AI-generated content, masqueraded influencer opinions, context-less listicles, and consumer manipulation, frequently heralded by graphs and charts that bolster the argument. Because this information firehose presents itself as technical visual communication, the overload is both cognitive and perceptual, potentially causing more insidious misperceptions than text alone. In addition to consuming such media, students in computing fields work with data to produce graphs and charts themselves, for assignments, academic research, and personal projects, blog posts, and tweets. Depending on their visual literacy (VL) and prior data-analysis instruction, many students inadvertently code misleading, unethical, or biased visualizations, potentially contributing to the dark corpus already festering online. Prior research on misconceptions in visualization pedagogy suggests that students benefit from repeated opportunities to forage for, curate, and critique examples, discussing and debating with peers and instructors. Inspired by these findings, we incorporated a visual curation and annotation platform into a Data Visualization computer science course, enabling students to search for and curate found examples of misleading visualizations, collaboratively annotate and critique those examples, and perform structured self-evaluation of misleading elements in their own work. We assess our interventions with pre-/post-course Visualization Literacy Assessment Tests, qualitative evaluation of student reflections, taxonomic evaluation of formative student-produced visualizations, and post-course exit surveys. Post-course, students' VL increased significantly, and the number and severity of misleading visualizations they created decreased. Students also reflected that they gained confidence both in spotting visual disinformation online and in avoiding creating it in software.
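
To make the notion of a "misleading visualization" concrete, the following is a minimal illustrative sketch, not taken from the paper (the data, labels, and figure are hypothetical), of one of the most common patterns novices produce: a truncated y-axis that visually exaggerates a small difference. It uses Python with matplotlib.

# Illustrative sketch only: hypothetical data showing how a truncated
# y-axis exaggerates a ~2% difference between two bars.
import matplotlib.pyplot as plt

labels = ["Product A", "Product B"]
values = [96, 98]  # hypothetical values with a small real difference

fig, (ax_misleading, ax_honest) = plt.subplots(1, 2, figsize=(8, 3))

# Misleading: the axis starts near the smallest value, so Product B
# appears several times "taller" than Product A.
ax_misleading.bar(labels, values)
ax_misleading.set_ylim(95, 99)
ax_misleading.set_title("Truncated axis (misleading)")

# Honest: a zero baseline shows the difference in true proportion.
ax_honest.bar(labels, values)
ax_honest.set_ylim(0, 100)
ax_honest.set_title("Zero baseline (honest)")

plt.tight_layout()
plt.show()

Side by side, the two panels encode identical data; only the axis limits differ, which is why this pattern is a staple of the foraging-and-critique exercises the abstract describes.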