GenderAlign: An Alignment Dataset for Mitigating Gender Bias in Large Language Models
dc.contributor.author | Zhang, Tao | |
dc.contributor.author | Zeng, Ziqian | |
dc.contributor.author | Xiao, Yuxiang | |
dc.contributor.author | Zhuang, Huiping | |
dc.contributor.author | Chen, Cen | |
dc.contributor.author | Foulds, James | |
dc.contributor.author | Pan, Shimei | |
dc.date.accessioned | 2025-02-13T17:56:14Z | |
dc.date.available | 2025-02-13T17:56:14Z | |
dc.date.issued | 2024-12-16 | |
dc.description.abstract | Large Language Models (LLMs) are prone to generating content that exhibits gender biases, raising significant ethical concerns. Alignment, the process of fine-tuning LLMs to better match desired behaviors, is recognized as an effective approach to mitigating gender bias. Although proprietary LLMs have made significant strides in mitigating gender bias, their alignment datasets are not publicly available. The commonly used, publicly available alignment dataset HH-RLHF still exhibits gender bias to some extent, and there is no publicly available alignment dataset specifically designed to address gender bias. Hence, we developed a new dataset named GenderAlign, aimed at mitigating a comprehensive set of gender biases in LLMs. The dataset comprises 8k single-turn dialogues, each paired with a "chosen" and a "rejected" response. Compared to the "rejected" responses, the "chosen" responses demonstrate lower levels of gender bias and higher quality. Furthermore, we categorized the gender biases in the "rejected" responses of GenderAlign into four principal categories. Experimental results show the effectiveness of GenderAlign in reducing gender bias in LLMs. | |
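To make the preference-pair layout described in the abstract concrete, here is a minimal Python sketch of reading one such record. The field names (prompt, chosen, rejected) and the example strings are assumptions for illustration only; the released GenderAlign dataset's actual schema and content may differ.

    # Hypothetical example of one GenderAlign-style preference pair.
    # Field names and example text are illustrative assumptions, not
    # taken from the released dataset.
    import json

    record = json.loads(
        '{"prompt": "Who is better suited to be a nurse?", '
        '"chosen": "Anyone with the required skills and empathy can be '
        'a good nurse, regardless of gender.", '
        '"rejected": "Women, because they are naturally more caring."}'
    )

    # Each single-turn dialogue pairs one prompt with a preferred
    # ("chosen") and a dispreferred ("rejected") response -- the format
    # consumed by preference-based alignment methods such as reward
    # modeling or DPO.
    print(record["prompt"])
    print("chosen  :", record["chosen"])
    print("rejected:", record["rejected"])

Pairing each prompt with both a lower-bias and a higher-bias response is what lets alignment training learn a preference signal rather than just imitating single reference answers.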
dc.description.uri | http://arxiv.org/abs/2406.13925 | |
dc.format.extent | 17 pages | |
dc.genre | journal articles | |
dc.genre | preprints | |
dc.identifier | doi:10.13016/m2lani-bx9l | |
dc.identifier.uri | https://doi.org/10.48550/arXiv.2406.13925 | |
dc.identifier.uri | http://hdl.handle.net/11603/37704 | |
dc.language.iso | en_US | |
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
dc.relation.ispartof | UMBC Information Systems Department | |
dc.relation.ispartof | UMBC College of Engineering and Information Technology Dean's Office | |
dc.relation.ispartof | UMBC Faculty Collection | |
dc.rights | Attribution 4.0 International CC BY 4.0 Deed | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | Computer Science - Computation and Language | |
dc.subject | Computer Science - Artificial Intelligence | |
dc.title | GenderAlign: An Alignment Dataset for Mitigating Gender Bias in Large Language Models | |
dc.type | Text | |
dcterms.creator | https://orcid.org/0000-0003-0935-4182 | |
dcterms.creator | https://orcid.org/0000-0002-5989-8543 | |