In Context Learning and Reasoning for Symbolic Regression with Large Language Models
dc.contributor.author | Sharlin, Samiha
dc.contributor.author | Josephson, Tyler R.
dc.date.accessioned | 2024-12-11T17:02:45Z
dc.date.available | 2024-12-11T17:02:45Z
dc.date.issued | 2024-10-22
dc.description.abstract | Large Language Models (LLMs) are transformer-based machine learning models that have shown remarkable performance in tasks for which they were not explicitly trained. Here, we explore the potential of LLMs to perform symbolic regression (SR) -- a machine learning method for finding simple and accurate equations from datasets. We prompt GPT-4 to suggest expressions from data, which are then optimized and evaluated using external Python tools. These results are fed back to GPT-4, which proposes improved expressions while optimizing for complexity and loss. Using chain-of-thought prompting, we instruct GPT-4 to analyze the data, prior expressions, and the scientific context (expressed in natural language) for each problem before generating new expressions. We evaluated the workflow on the rediscovery of five well-known scientific equations from experimental data, and on an additional dataset without a known equation. GPT-4 successfully rediscovered all five equations and, in general, performed better when prompted to use a scratchpad and consider scientific context. We also demonstrate how strategic prompting improves the model's performance and how the natural language interface simplifies integrating theory with data. Although this approach does not outperform established SR programs where target equations are more complex, LLMs can nonetheless iterate toward improved solutions while following instructions and incorporating scientific context in natural language.
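The abstract describes an iterative loop: the LLM proposes candidate expressions, external Python tools fit their constants and score them, and the scored results are fed back into the next prompt. Below is a minimal sketch of such a loop, not the authors' code; `ask_llm_for_expression`, the expression string format, and the SciPy-based fitting step are illustrative assumptions, with the GPT-4 call replaced by a placeholder.

```python
# Minimal sketch (not the authors' implementation) of an LLM-in-the-loop
# symbolic regression cycle: propose an expression, fit its constants,
# score it, and feed the result back into the prompt history.
import numpy as np
from scipy.optimize import curve_fit


def ask_llm_for_expression(history: list[str]) -> str:
    """Hypothetical stand-in for a chain-of-thought GPT-4 call that would
    see the data, prior expressions, and scientific context. Here it just
    returns a fixed candidate with fit parameters c0 and c1."""
    return "c0 * np.exp(c1 * x)"


def fit_and_score(expr: str, x: np.ndarray, y: np.ndarray) -> float:
    """Optimize the constants of a proposed expression with curve_fit and
    return its mean squared error on the data."""
    def model(x, c0, c1):
        return eval(expr, {"np": np, "x": x, "c0": c0, "c1": c1})

    params, _ = curve_fit(model, x, y, p0=[1.0, 1.0], maxfev=10000)
    return float(np.mean((model(x, *params) - y) ** 2))


# Toy dataset with a known ground-truth equation y = 2 * exp(3x).
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(3.0 * x)

history: list[str] = []
for _ in range(3):  # a few propose-fit-feedback iterations
    expr = ask_llm_for_expression(history)
    loss = fit_and_score(expr, x, y)
    history.append(f"{expr}  (MSE={loss:.3g})")  # fed back to the next prompt
```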
dc.description.sponsorship | We thank Roger Guimerà for sharing the detailed results of all models on the Nikuradse dataset. This material is based upon work supported by the National Science Foundation under Grant No. 2138938.
dc.description.uri | http://arxiv.org/abs/2410.17448
dc.format.extent | 21 pages
dc.genre | journal articles
dc.genre | preprints
dc.identifier | doi:10.13016/m2itrd-boje
dc.identifier.uri | https://doi.org/10.48550/arXiv.2410.17448
dc.identifier.uri | http://hdl.handle.net/11603/37103
dc.language.iso | en_US
dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof | UMBC Chemical, Biochemical & Environmental Engineering Department
dc.relation.ispartof | UMBC Student Collection
dc.relation.ispartof | UMBC Computer Science and Electrical Engineering Department
dc.relation.ispartof | UMBC Faculty Collection
dc.rights | Attribution 4.0 International (CC BY 4.0)
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/
dc.subject | Computer Science - Artificial Intelligence
dc.subject | Computer Science - Computation and Language
dc.title | In Context Learning and Reasoning for Symbolic Regression with Large Language Models
dc.type | Text
dcterms.creator | https://orcid.org/0000-0002-6379-9206
dcterms.creator | https://orcid.org/0000-0002-0100-0227