COBIAS: Assessing the Contextual Reliability of Bias Benchmarks for Language Models
dc.contributor.author | Govil, Priyanshul | |
dc.contributor.author | Jain, Hemang | |
dc.contributor.author | Bonagiri, Vamshi Krishna | |
dc.contributor.author | Chadha, Aman | |
dc.contributor.author | Kumaraguru, Ponnurangam | |
dc.contributor.author | Gaur, Manas | |
dc.contributor.author | Dey, Sanorita | |
dc.date.accessioned | 2025-06-17T14:45:38Z | |
dc.date.available | 2025-06-17T14:45:38Z | |
dc.date.issued | 2025-05-20 | |
dc.description | Websci '25: 17th ACM Web Science Conference, May 20–23, 2025, New Brunswick, New Jersey | |
dc.description.abstract | Large Language Models (LLMs) often inherit biases from the web data they are trained on, which contains stereotypes and prejudices. Current methods for evaluating and mitigating these biases rely on bias-benchmark datasets. These benchmarks measure bias by observing an LLM’s behavior on biased statements. However, these statements lack contextual consideration of the situations they aim to represent. To address this, we introduce a contextual reliability framework, which evaluates model robustness to biased statements by considering the various contexts in which they may appear. We develop the Context-Oriented Bias Indicator and Assessment Score (COBIAS) to measure a biased statement’s reliability in detecting bias, based on the variance in model behavior across different contexts. To evaluate the metric, we augmented 2,291 stereotyped statements from two existing benchmark datasets by adding contextual information. We show that COBIAS aligns with human judgment on the contextual reliability of biased statements (Spearman’s ρ = 0.65, p = 3.4×10⁻⁶⁰) and can be used to create reliable benchmarks, which would assist bias-mitigation efforts. Our data and code are publicly available. Warning: Some examples in this paper may be offensive or upsetting. | |
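The following minimal Python sketch illustrates the idea the abstract describes, not the authors' implementation: score each statement under several context-augmented variants, use the variance of the model's behavior as an indicator of contextual (un)reliability, and validate the metric against human judgments with a Spearman rank correlation. The function names, model scores, and human ratings below are hypothetical placeholders; the actual COBIAS formula, including its sign and normalization, is defined in the paper, and the real data and code are at the linked DOI.

```python
# Illustrative sketch only (not the authors' implementation). All numbers
# below are toy placeholders, not the paper's data.

from statistics import pvariance
from scipy.stats import spearmanr


def contextual_variance(model_scores: list[float]) -> float:
    """Variance of a model's scores for one statement across its contexts.

    Higher variance means the statement's apparent bias depends heavily on
    the surrounding context, i.e., the statement is a less reliable probe
    of bias on its own.
    """
    return pvariance(model_scores)


# Toy data: four statements, each scored by some hypothetical model signal
# (e.g., a probability of a stereotyped completion) in three contexts.
per_statement_scores = {
    "s1": [0.91, 0.35, 0.12],  # behavior swings with context
    "s2": [0.40, 0.42, 0.39],  # stable across contexts
    "s3": [0.75, 0.50, 0.20],
    "s4": [0.55, 0.57, 0.52],
}
metric = {s: contextual_variance(v) for s, v in per_statement_scores.items()}

# Validation step mirroring the paper's evaluation design: rank-correlate
# the metric with (hypothetical) human reliability ratings, 1 = least
# reliable, 5 = most reliable. With variance as the metric, a negative
# correlation is expected: high variance implies low reliability.
human_ratings = {"s1": 1, "s2": 5, "s3": 2, "s4": 4}
statements = list(per_statement_scores)
rho, p = spearmanr(
    [metric[s] for s in statements],
    [human_ratings[s] for s in statements],
)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```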
dc.description.sponsorship | We thank Arya Topale and Sanchit Jalan for their help in metric validation. We also thank UMBC and iHUB - IIIT Hyderabad for financially supporting this project. | |
dc.description.uri | https://dl.acm.org/doi/10.1145/3717867.3717923 | |
dc.format.extent | 12 pages | |
dc.genre | conference papers and proceedings | |
dc.identifier | doi:10.13016/m2oqtk-p6di | |
dc.identifier.citation | Priyanshul Govil et al., “COBIAS: Assessing the Contextual Reliability of Bias Benchmarks for Language Models,” in Proceedings of the 17th ACM Web Science Conference 2025, Websci ’25 (New York, NY, USA: Association for Computing Machinery, 2025), 460–71, https://doi.org/10.1145/3717867.3717923. | |
dc.identifier.uri | https://doi.org/10.1145/3717867.3717923 | |
dc.identifier.uri | http://hdl.handle.net/11603/38923 | |
dc.language.iso | en_US | |
dc.publisher | ACM | |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department | |
dc.relation.ispartof | UMBC Faculty Collection | |
dc.rights | This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author. | |
dc.subject | Framework | |
dc.subject | Human-centered computing | |
dc.subject | Bias Benchmark | |
dc.subject | Metric | |
dc.subject | Contextual Reliability | |
dc.subject | Collaborative and social computing | |
dc.subject | Context-Oriented Bias Indicator and Assessment Score (COBIAS) | |
dc.subject | Natural language processing | |
dc.subject | Computing methodologies | |
dc.subject | Large Language Models (LLMs) | |
dc.subject | Language Model | |
dc.subject | UMBC Ebiquity Research Group | |
dc.subject | Stereotype | |
dc.title | COBIAS: Assessing the Contextual Reliability of Bias Benchmarks for Language Models | |
dc.type | Text | |
dcterms.creator | https://orcid.org/0000-0003-3346-5886 | |
dcterms.creator | https://orcid.org/0000-0002-5411-2230 | |