In Context Learning and Reasoning for Symbolic Regression with Large Language Models

dc.contributor.author: Sharlin, Samiha
dc.contributor.author: Josephson, Tyler R.
dc.date.accessioned: 2024-12-11T17:02:45Z
dc.date.available: 2024-12-11T17:02:45Z
dc.date.issued: 2024-10-22
dc.description.abstract: Large Language Models (LLMs) are transformer-based machine learning models that have shown remarkable performance in tasks for which they were not explicitly trained. Here, we explore the potential of LLMs to perform symbolic regression -- a machine-learning method for finding simple and accurate equations from datasets. We prompt GPT-4 to suggest expressions from data, which are then optimized and evaluated using external Python tools. These results are fed back to GPT-4, which proposes improved expressions while optimizing for complexity and loss. Using chain-of-thought prompting, we instruct GPT-4 to analyze the data, prior expressions, and the scientific context (expressed in natural language) for each problem before generating new expressions. We evaluated the workflow on the rediscovery of five well-known scientific equations from experimental data, and on an additional dataset without a known equation. GPT-4 successfully rediscovered all five equations and, in general, performed better when prompted to use a scratchpad and to consider scientific context. We also demonstrate how strategic prompting improves the model's performance and how the natural language interface simplifies integrating theory with data. Although this approach does not outperform established symbolic regression programs when target equations are more complex, LLMs can nonetheless iterate toward improved solutions while following instructions and incorporating scientific context in natural language.
dc.description.sponsorship: We thank Roger Guimerà for sharing the detailed results of all models on the Nikuradse dataset. This material is based upon work supported by the National Science Foundation under Grant No. 2138938.
dc.description.uri: http://arxiv.org/abs/2410.17448
dc.format.extent: 21 pages
dc.genre: journal articles
dc.genre: preprints
dc.identifier: doi:10.13016/m2itrd-boje
dc.identifier.uri: https://doi.org/10.48550/arXiv.2410.17448
dc.identifier.uri: http://hdl.handle.net/11603/37103
dc.language.iso: en_US
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Chemical, Biochemical & Environmental Engineering Department
dc.relation.ispartof: UMBC Student Collection
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: Attribution 4.0 International CC BY 4.0
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: Computer Science - Artificial Intelligence
dc.subject: Computer Science - Computation and Language
dc.title: In Context Learning and Reasoning for Symbolic Regression with Large Language Models
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-6379-9206
dcterms.creator: https://orcid.org/0000-0002-0100-0227
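The abstract above describes an iterative propose-optimize-feedback loop: GPT-4 proposes candidate expressions, external Python tools fit their constants and score them, and the scored history is fed back into the next prompt. Below is a minimal sketch of that loop, not the authors' code: ask_llm is a hypothetical stand-in for the GPT-4 call, and the fixed two-constant expression form (c0, c1) is a simplifying assumption.

    # Sketch of the propose-optimize-feedback loop from the abstract.
    # ask_llm is hypothetical; the expression format and loss are
    # illustrative, not the paper's actual implementation.
    import numpy as np
    from scipy.optimize import curve_fit

    def ask_llm(prompt: str) -> str:
        """Hypothetical LLM call; in the paper this is GPT-4 with
        chain-of-thought ('scratchpad') prompting."""
        raise NotImplementedError("wire up an LLM client here")

    def fit_and_score(expr: str, x: np.ndarray, y: np.ndarray):
        """Optimize constants c0, c1 in a candidate expression string
        (e.g. 'c0 * x + c1 * np.log(x)') and return (params, mse)."""
        def model(x, c0, c1):
            return eval(expr, {"np": np, "x": x, "c0": c0, "c1": c1})
        params, _ = curve_fit(model, x, y, p0=[1.0, 1.0], maxfev=10000)
        mse = float(np.mean((model(x, *params) - y) ** 2))
        return params, mse

    def symbolic_regression_loop(x, y, context: str, n_iter: int = 10):
        history = []  # (expression, mse) pairs fed back to the LLM
        for _ in range(n_iter):
            prompt = (
                f"Scientific context: {context}\n"
                f"Previous expressions and losses: {history}\n"
                "Analyze the data and prior attempts in a scratchpad, "
                "then propose a new expression in x with constants "
                "c0, c1 that lowers the loss at low complexity."
            )
            expr = ask_llm(prompt)
            try:
                params, mse = fit_and_score(expr, x, y)
            except Exception:
                continue  # skip expressions that fail to parse or fit
            history.append((expr, mse))
        if not history:
            raise RuntimeError("no candidate expression could be fit")
        return min(history, key=lambda pair: pair[1])  # best (expr, mse)

In the paper's workflow, the scientific context and prior attempts are expressed in natural language in the prompt, and the model is instructed to trade off complexity against loss; the prompt string above only gestures at that.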

Files

Original bundle

Name: 2410.17448v1.pdf
Size: 10.35 MB
Format: Adobe Portable Document Format