COBIAS: Assessing the Contextual Reliability of Bias Benchmarks for Language Models

dc.contributor.author: Govil, Priyanshul
dc.contributor.author: Jain, Hemang
dc.contributor.author: Bonagiri, Vamshi Krishna
dc.contributor.author: Chadha, Aman
dc.contributor.author: Kumaraguru, Ponnurangam
dc.contributor.author: Gaur, Manas
dc.contributor.author: Dey, Sanorita
dc.date.accessioned: 2025-06-17T14:45:38Z
dc.date.available: 2025-06-17T14:45:38Z
dc.date.issued: 2025-05-20
dc.description: Websci '25: 17th ACM Web Science Conference, May 20-23, 2025, New Brunswick, New Jersey
dc.description.abstract: Large Language Models (LLMs) often inherit biases from the web data they are trained on, which contains stereotypes and prejudices. Current methods for evaluating and mitigating these biases rely on bias-benchmark datasets. These benchmarks measure bias by observing an LLM's behavior on biased statements. However, these statements lack contextual considerations of the situations they try to present. To address this, we introduce a contextual reliability framework, which evaluates model robustness to biased statements by considering the various contexts in which they may appear. We develop the Context-Oriented Bias Indicator and Assessment Score (COBIAS) to measure a biased statement's reliability in detecting bias, based on the variance in model behavior across different contexts. To evaluate the metric, we augmented 2,291 stereotyped statements from two existing benchmark datasets by adding contextual information. We show that COBIAS aligns with human judgment on the contextual reliability of biased statements (Spearman's ρ = 0.65, p = 3.4 × 10⁻⁶⁰) and can be used to create reliable benchmarks, which would assist bias mitigation efforts. Our data and code are publicly available. Warning: Some examples in this paper may be offensive or upsetting.
dc.description.sponsorship: We thank Arya Topale and Sanchit Jalan for their help in metric validation. Finally, we thank UMBC and iHUB - IIIT Hyderabad for financially supporting this project.
dc.description.uri: https://dl.acm.org/doi/10.1145/3717867.3717923
dc.format.extent: 12 pages
dc.genre: conference papers and proceedings
dc.identifier: doi:10.13016/m2oqtk-p6di
dc.identifier.citation: Priyanshul Govil et al., "COBIAS: Assessing the Contextual Reliability of Bias Benchmarks for Language Models," in Proceedings of the 17th ACM Web Science Conference 2025, Websci '25 (New York, NY, USA: Association for Computing Machinery, 2025), 460-71, https://doi.org/10.1145/3717867.3717923.
dc.identifier.uri: https://doi.org/10.1145/3717867.3717923
dc.identifier.uri: http://hdl.handle.net/11603/38923
dc.language.iso: en_US
dc.publisher: ACM
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.subject: Framework
dc.subject: Human-centered computing
dc.subject: Bias Benchmark
dc.subject: Metric
dc.subject: Contextual Reliability
dc.subject: Collaborative and social computing
dc.subject: Context-Oriented Bias Indicator and Assessment Score (COBIAS)
dc.subject: Natural language processing
dc.subject: Computing methodologies
dc.subject: Large Language Models (LLMs)
dc.subject: Language Model
dc.subject: UMBC Ebiquity Research Group
dc.subject: Stereotype
dc.title: COBIAS: Assessing the Contextual Reliability of Bias Benchmarks for Language Models
dc.type: Text
dcterms.creator: https://orcid.org/0000-0003-3346-5886
dcterms.creator: https://orcid.org/0000-0002-5411-2230

Files

Original bundle

Name: 2402.14889v4.pdf
Size: 7.54 MB
Format: Adobe Portable Document Format