Grounding Synthetic Data Evaluations of Language Models in Unsupervised Document Corpora
dc.contributor.author | Majurski, Michael
dc.contributor.author | Matuszek, Cynthia
dc.date.accessioned | 2025-06-17T14:45:18Z
dc.date.available | 2025-06-17T14:45:18Z
dc.date.issued | 2025-05-16
dc.description.abstract | Language Models (LMs) continue to advance, improving response quality and coherence. Given Internet-scale training datasets, LMs have likely encountered much of what users may ask them to generate, in some form, during their training. A plethora of evaluation benchmarks have been constructed to assess model quality, response appropriateness, and reasoning capabilities. However, the human effort required for benchmark construction is rapidly being outpaced by the size and scope of the models under evaluation, and having humans build a benchmark for every possible domain of interest is impractical. Therefore, we propose a methodology for automating the construction of fact-based synthetic data model evaluations grounded in document populations. This work leverages the same LMs to evaluate domain-specific knowledge automatically, using only grounding documents (e.g., a textbook) as input. This synthetic data benchmarking approach corresponds well with human-curated questions, producing a Spearman ranking correlation of 0.97 and a benchmark-accuracy Pearson correlation of 0.75. This novel approach supports generating both multiple-choice and open-ended synthetic data questions to gain diagnostic insight into LM capability. We apply this methodology to evaluate model performance on two recent arXiv preprints, discovering surprisingly strong performance from Gemma-3 models on open-ended questions. Code is available at https://github.com/mmajurski/grounded-synth-lm-benchmark
dc.description.uri | http://arxiv.org/abs/2505.08905
dc.format.extent | 21 pages
dc.genre | journal articles
dc.genre | preprints
dc.identifier | doi:10.13016/m2udba-jp2x
dc.identifier.uri | https://doi.org/10.48550/arXiv.2505.08905
dc.identifier.uri | http://hdl.handle.net/11603/38875
dc.language.iso | en_US
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department
dc.relation.ispartof | UMBC Faculty Collection
dc.relation.ispartof | UMBC Student Collection
dc.relation.ispartof | UMBC Information Systems Department
dc.rights | Attribution 4.0 International
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/
dc.subject | Computer Science - Computation and Language
dc.subject | Computer Science - Artificial Intelligence
dc.subject | UMBC Interactive Robotics and Language Lab
dc.subject | UMBC Interactive Robotics and Language Lab (IRAL Lab)
dc.title | Grounding Synthetic Data Evaluations of Language Models in Unsupervised Document Corpora
dc.type | Text
dcterms.creator | https://orcid.org/0000-0003-1383-8120
dcterms.creator | https://orcid.org/0000-0001-9663-3803 |
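The abstract above reports agreement between the synthetic, document-grounded benchmark and human-curated questions as a Spearman ranking correlation of 0.97 and a Pearson accuracy correlation of 0.75. As a minimal sketch (not the paper's released code), the snippet below shows how such agreement statistics between per-model accuracies on two benchmarks could be computed with scipy; the accuracy values are hypothetical placeholders, not results from the paper.

```python
# Minimal sketch: comparing a synthetic, document-grounded benchmark against
# a human-curated benchmark across the same set of models.
# NOTE: all accuracy values below are hypothetical placeholders.
from scipy.stats import spearmanr, pearsonr

# Hypothetical per-model accuracies (same model order in both lists).
synthetic_acc = [0.71, 0.64, 0.82, 0.58, 0.77]  # grounded synthetic questions
human_acc = [0.69, 0.61, 0.85, 0.55, 0.74]      # human-curated questions

# Spearman: do the two benchmarks rank the models in the same order?
rho, rho_p = spearmanr(synthetic_acc, human_acc)

# Pearson: do the raw accuracy values track each other linearly?
r, r_p = pearsonr(synthetic_acc, human_acc)

print(f"Spearman ranking correlation: {rho:.2f} (p={rho_p:.3f})")
print(f"Pearson accuracy correlation: {r:.2f} (p={r_p:.3f})")
```

The Spearman statistic captures whether the two benchmarks order the models the same way, while the Pearson statistic measures how closely the raw benchmark accuracies track each other.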