ICML 2026

DecAEvolve: Decompose, Adapt, Evolve
Three Pillars of LLM-based Equation Discovery

Pouya Behzadifar*1, Parshin Shojaee*2, Sanchit Kabra*2, Kazem Meidani3,4, Chandan K Reddy2

1Sharif University of Technology  ·  2Virginia Tech  ·  3Capital One  ·  4Carnegie Mellon University

*Equal contribution

Decompose, Adapt, Evolve illustration

TL;DR

Equation discovery as a closed loop, not a static prompt.

DecAEvolve unifies three mechanisms inside an LLM-guided evolutionary search: decomposition turns a scalar reward into per-term contribution feedback; GRPO test-time adaptation distills the observed system into the policy through LoRA updates; evolution stores annotated programs in a multi-island buffer so future prompts condition on why things worked.

Decompose

Parse each program into an AST, isolate atomic terms, and quantify single-term and pairwise contributions via re-optimized ablations.

Adapt

Run GRPO with group-relative advantages and a KL anchor on LoRA adapters — test-time RL that aligns the policy with the data.

Evolve

Insert decomposition-annotated programs into a multi-island experience buffer that seeds the next prompt with structural evidence.

Motivation

Where LLM-based equation discovery breaks.

Existing LLM-SR systems treat the LLM as a fixed hypothesis generator and a scalar MSE as the only feedback. Two of the strongest signals available to the search are simply discarded.

The LLM never sees the system

Across thousands of search candidates, the policy weights are frozen. Whatever priors the model entered with — biased toward textbook oscillators, Michaelis–Menten kinetics, polynomial constitutive laws — are the priors it leaves with. There is no mechanism to internalize that this oscillator has a non-standard damping term, or that this growth curve breaks below a pH threshold.

Scalar rewards hide structure

A single MSE number says "good" or "bad" but not why. If a candidate scores well, was it the cubic term, the sinusoidal interaction, or the additive bias? Without that attribution, the next prompt can only retry whole equations and hope.

Working hypothesis. Discovery improves on two axes simultaneously when (i) the policy is updated online with reward-weighted gradients so its distribution drifts toward the observed system, and (ii) every reward is decomposed back to the symbolic components that produced it.

Framework

One iteration of DecAEvolve.

Generate a group of candidate programs → score them with BFGS-fit parameters → decompose the survivors into atoms and attribute credit → update the policy with GRPO using the resulting rewards → push annotated programs into a multi-island buffer that seeds the next prompt.

DecAEvolve framework overview
Framework overview. The Adapt branch (pink) sends group-relative rewards back to the LLM via GRPO. The Decompose branch (blue) parses each program into an AST, ablates terms with re-optimization, and serializes contribution scores as inline comments before the program re-enters the population. Decomposition feeds adaptation; adaptation feeds evolution; evolution feeds decomposition.

Method

Three coupled mechanisms.

Stage 1 (adaptation): for N iterations, generate, score, decompose, and update the LLM with GRPO. Stage 2 (search): freeze the adapted policy and run T more iterations of decomposition-guided evolutionary search.
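As a schematic of that two-stage schedule (a sketch only: the five callbacks stand in for the real generator, scorer, decomposer, GRPO step, and buffer, and the iteration counts are placeholders, not the paper's settings):

def run_decaevolve(generate, score, decompose, grpo_update, buffer_insert,
                   n_adapt=100, t_search=400, group_size=64):
    # Stage 1 (first n_adapt steps): generate, score, decompose, and update.
    # Stage 2 (remaining t_search steps): same loop with the policy frozen.
    for step in range(n_adapt + t_search):
        group = [generate() for _ in range(group_size)]   # sample G candidates
        rewards = [score(p) for p in group]               # BFGS fit, exp(-MSE)
        annotated = [decompose(p) for p in group]         # inline ablation comments
        if step < n_adapt:
            grpo_update(group, rewards)                   # LoRA-only policy update
        for program, reward in zip(annotated, rewards):
            buffer_insert(program, reward)                # seed the next prompt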

1 — Decompose: programs → structured feedback

Each candidate function body is parsed with Python's ast module, intermediate assignments are inlined into the return expression, and the AST is traversed under three rules (sketched in code after the list):

Split additive structure

Top-level + and − become term boundaries. The program is exposed as a linear combination of atoms uₘ(x).

Preserve multiplicative subtrees

Products, divisions, and powers stay intact: p₀·sin(x)·x² is one atom. Operator precedence is respected.

Function calls are atoms

Unary operators and library functions (sin, exp, np.abs, …) are leaves of the atom set.
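To make the three rules concrete, here is a minimal sketch of the additive split, assuming the body has already been inlined into a single return expression; additive_atoms is an illustrative helper, not the paper's code:

import ast

def additive_atoms(expr_src):
    # Parse the (already inlined) return expression and split it at
    # top-level + / - while leaving products, powers, and calls intact.
    tree = ast.parse(expr_src, mode="eval").body
    atoms = []

    def walk(node, sign=1):
        if isinstance(node, ast.BinOp) and isinstance(node.op, (ast.Add, ast.Sub)):
            walk(node.left, sign)
            walk(node.right, sign if isinstance(node.op, ast.Add) else -sign)
        elif isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            walk(node.operand, -sign)
        else:
            src = ast.unparse(node)          # Python 3.9+
            atoms.append(src if sign == 1 else "-(" + src + ")")

    walk(tree)
    return atoms

print(additive_atoms("params[0]*x + params[1]*v - params[2]*np.sin(x)*x**2"))
# ['params[0] * x', 'params[1] * v', '-(params[2] * np.sin(x) * x ** 2)']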

For each atom uₘ, DecAEvolve constructs an ablated program f∖uₘ, re-optimizes the remaining BFGS parameters on the data, and computes a marginal contribution:

Δuₘ  =  Score𝒯(f, 𝒟)  −  Score𝒯(f∖uₘ, 𝒟)

The same construction over pairs (uₘ, uₙ) yields interaction contributions Δuₘ,uₙ, exposing redundancy versus synergy. Re-optimization is essential — with frozen parameters, removing a term can look catastrophic only because the rest of the equation was never allowed to compensate. Contributions are serialized as inline Python comments above the return statement, so executable semantics are unchanged but the next prompt now reads structural evidence.
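A compact sketch of the re-optimized ablation built on that split; fitted_score and marginal_contributions are illustrative names, the 8-parameter budget is an assumption, and data is a dict of observed arrays that includes the target y:

import numpy as np
from scipy.optimize import minimize

def fitted_score(atoms, data):
    # Refit the parameters of the program formed by `atoms`, then return
    # the bounded score exp(-MSE) used throughout the search.
    y = data["y"]
    def mse(params):
        env = {"np": np, "params": params, **data}
        pred = sum(eval(a, env) for a in atoms)   # evaluate the atom sum
        return float(np.mean((pred - y) ** 2))
    res = minimize(mse, x0=np.ones(8), method="BFGS")  # re-optimize, never freeze
    return float(np.exp(-res.fun))

def marginal_contributions(atoms, data):
    # Delta(u_m) = Score(f) - Score(f \ u_m), with a fresh fit per ablation.
    full = fitted_score(atoms, data)
    return {a: full - fitted_score([b for b in atoms if b != a], data)
            for a in atoms}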

Inline annotations on a candidate equation
A real annotation. A damped nonlinear-oscillator candidate decorated with single-term and pairwise ablation deltas before being written back to the experience buffer. The LLM that reads this in a future prompt sees that params[0]·x dominates, that params[1]·v contributes almost nothing alone, and that the (params[0]·x, params[2]) pair carries even more signal than params[0]·x on its own.

2 — Adapt: GRPO at test time, with LoRA

Adaptation is reinforcement learning over a deterministic MDP whose state is a (prompt, prefix) pair, whose action is the next token, and whose reward is the bounded validation score r(p, h) = exp(−MSE(h, 𝒟)), floored at 0.01 for invalid completions. Each prompt samples G = 64 completions, GRPO computes a per-prompt baseline b(p) and group-relative advantages Aᵢ = rᵢ − b(p), and optimizes:

ℒ(θ) = − 𝔼p,hᵢ [  (1/G) Σᵢ (1/|hᵢ|) Σₜ   min( ρᵢ,ₜ Aᵢ,  clip(ρᵢ,ₜ, 1−ε, 1+ε) Aᵢ )  ]  +  β · KL(π_θ ‖ π_ref),   where ρᵢ,ₜ is the token-level probability ratio between the updated and sampling policies.

LoRA adapters (rank 16, α = 16, dropout 0.05) carry all trainable parameters; the base model stays frozen as the reference π_ref. β = 0.05 regularizes drift, and together with the per-prompt baseline it produces low-variance, single-step updates that are safe to take during the search itself.

G = 64 · temperature 0.8 · top-p 0.9 · LoRA r = 16, α = 16 · β = 0.05 · Adam, lr 1e-6 · batch 16 × grad-accum 4 · 200 warmup steps
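A minimal sketch of that objective for one prompt group, assuming per-token log-probabilities have already been gathered; the tensor shapes, ε = 0.2, and the k3 KL estimator are assumptions of this sketch, not the paper's exact code:

import torch

def grpo_loss(logp_new, logp_old, logp_ref, rewards, eps=0.2, beta=0.05):
    # logp_*: [G, T] per-token log-probs under the updated policy, the
    # sampling policy, and the frozen reference; rewards: [G] in (0, 1].
    adv = rewards - rewards.mean()                       # A_i = r_i - b(p)
    ratio = (logp_new - logp_old).exp()                  # rho_{i,t}
    clipped = ratio.clamp(1.0 - eps, 1.0 + eps)
    pg = torch.minimum(ratio * adv[:, None], clipped * adv[:, None])
    log_r = logp_ref - logp_new                          # k3 estimate of
    kl = log_r.exp() - log_r - 1.0                       # KL(pi_theta || pi_ref)
    return -pg.mean(dim=1).mean() + beta * kl.mean()     # length-normalized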

3 — Evolve: annotated populations, multi-island buffers

The buffer 𝒫 = ⋃i 𝒫(i) is sharded into independent islands that diverge over time and prevent premature convergence. A new program is admitted to its source island only if it strictly beats that island's best score. Every 4 hours the worst-performing half of islands is overwritten with copies from a surviving island. Within an island, programs are clustered by score signature, sampled by Boltzmann weights with an annealing temperature, and ranked within a cluster by length-and-score. Each prompt is a hierarchical sample — island → cluster → program — concatenated into a structured few-shot context. The few-shot examples carry their decomposition annotations, so the model conditions on component-level success.
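For intuition, a sketch of the island → cluster → program sampling with Boltzmann weights and an annealing temperature; the data layout and schedule constants here are assumptions, not the paper's implementation:

import math, random
from dataclasses import dataclass

@dataclass
class Program:
    score: float     # bounded validation score
    length: int      # program size, used for tie-breaking
    source: str      # annotated source, decomposition comments included

def sample_fewshot(islands, step, k=2, t0=1.0, decay=1e-4):
    # islands: list of {score_signature: [Program, ...]} cluster maps
    temp = max(t0 * math.exp(-decay * step), 1e-6)       # annealing temperature
    island = random.choice(islands)                      # 1) pick an island
    clusters = list(island.values())
    weights = [math.exp(max(p.score for p in c) / temp)  # 2) Boltzmann over
               for c in clusters]                        #    cluster best scores
    cluster = random.choices(clusters, weights=weights, k=1)[0]
    ranked = sorted(cluster, key=lambda p: (-p.score, p.length))
    return ranked[:k]                                    # 3) short and strong first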

Experimental setup

Four scientific benchmarks, six open backbones.

Each dataset ships with predefined train, in-domain (ID) test, and out-of-domain (OOD) test splits. We report normalized MSE — Σ(ŷ−y)² / Σ(y−ȳ)² — averaged over five runs. Search budget: 3,000 LLM calls per problem, BFGS via SciPy with a 30s per-hypothesis timeout.
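The reported metric, written out for reference (NumPy; normalized_mse is our label for it):

import numpy as np

def normalized_mse(y_pred, y_true):
    # squared residuals normalized by the total variance of the targets
    return np.sum((y_pred - y_true) ** 2) / np.sum((y_true - np.mean(y_true)) ** 2)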

Oscillator 1 & 2

Damped second-order ODEs in displacement and velocity, structured to deviate from textbook spring–mass forms.

E. coli growth

Biological growth as a function of density, substrate, temperature, and pH — coupled nonlinearities not in standard recall.

Stress–strain

Real experimental tensile-response data for aluminum across temperatures. No closed-form ground truth.

Backbones

Llama-3.2 (1B, 3B), Llama-3.1-8B, Qwen2.5 (1.5B, 3B, 7B). All open-source, no proprietary models in DecAEvolve.

Baselines. Classical SR (GPlearn, PySR, SINDy) · Deep / neural SR (DSR, uDSR, NeSymReS, E2E) · LLM-SR with both proprietary backbones (Mixtral, GPT-3.5-turbo) and the same six open-source backbones used by DecAEvolve, under matched LLM-call budgets.

Results

Up to two orders of magnitude lower OOD error.

DecAEvolve consistently wins on the OOD splits — the regime where static search overfits or stalls — and the gains transfer across all six backbones from 1.5B to 8B parameters.

| Method | Osc. 1 ID | Osc. 1 OOD | Osc. 2 ID | Osc. 2 OOD | E. coli ID | E. coli OOD | Stress ID | Stress OOD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Classical & neural SR baselines | | | | | | | | |
| GPlearn | 0.0155 | 0.5567 | 0.7551 | 3.188 | 1.081 | 1.039 | 0.1063 | 0.4091 |
| NeSymReS | 0.0047 | 0.5377 | 0.2488 | 0.6472 | N/A (d>3) | N/A (d>3) | 0.7928 | 0.6377 |
| E2E | 0.0082 | 0.3722 | 0.1401 | 0.1911 | 0.6321 | 1.4467 | 0.2262 | 0.5867 |
| DSR | 0.0087 | 0.2454 | 0.0580 | 0.1945 | 0.9451 | 2.4291 | 0.3326 | 1.108 |
| uDSR | 0.0003 | 0.0007 | 0.0032 | 0.0015 | 0.3322 | 5.4584 | 0.0502 | 0.1761 |
| PySR | 9.0e-4 | 0.3106 | 2.0e-4 | 0.0098 | 0.0376 | 1.0141 | 0.0331 | 0.1304 |
| SINDy | 0.9888 | 0.7097 | **4.62e-16** | **1.45e-8** | 1.078 | 1.039 | 0.0781 | 3.52e15 |
| LLM-SR with proprietary backbones | | | | | | | | |
| LLM-SR (Mixtral) | **7.89e-8** | 0.0002 | 0.0030 | 0.0291 | 0.0026 | 0.0037 | 0.0162 | 0.0946 |
| LLM-SR (GPT-3.5-turbo) | 4.65e-7 | 0.0005 | 2.12e-7 | 3.81e-5 | 0.0214 | 0.0264 | 0.0210 | 0.0516 |
| LLM-SR with open backbones | | | | | | | | |
| LLM-SR (Llama-3.2-1B) | 0.0003 | 0.1121 | 0.0105 | 0.0543 | 0.0133 | 0.3544 | 0.0934 | 0.3821 |
| LLM-SR (Llama-3.2-3B) | 1.41e-5 | 0.0014 | 0.0021 | 0.0053 | 0.0122 | 0.0588 | 0.0629 | 0.1672 |
| LLM-SR (Llama-3.1-8B) | 1.36e-5 | 0.0009 | 4.61e-6 | 0.0001 | 0.0117 | 0.0240 | 0.0376 | 0.0761 |
| LLM-SR (Qwen2.5-1.5B) | 0.0011 | 0.1233 | 0.0027 | 0.0721 | 0.7237 | 9.9483 | 0.1249 | 0.2435 |
| LLM-SR (Qwen2.5-3B) | 0.0003 | 0.0168 | 0.0018 | 0.0432 | 0.0135 | 0.8011 | 0.0905 | 0.2085 |
| LLM-SR (Qwen2.5-7B) | 1.33e-5 | 0.0017 | 0.0002 | 0.0011 | 0.0109 | 0.1285 | 0.0423 | 0.1851 |
| DecAEvolve (ours) | | | | | | | | |
| DecAEvolve (Llama-3.2-1B) | 2.09e-5 | 0.0011 | 0.0018 | 0.0136 | 0.0114 | 0.0698 | 0.0704 | 0.0924 |
| DecAEvolve (Llama-3.2-3B) | 1.57e-6 | 0.0004 | 0.0003 | 0.0005 | 0.0074 | 0.0102 | 0.0311 | 0.0358 |
| DecAEvolve (Llama-3.1-8B) | 1.37e-6 | 0.0002 | 3.64e-7 | 2.11e-5 | 0.0019 | 0.0045 | **0.0144** | **0.0322** |
| DecAEvolve (Qwen2.5-1.5B) | 0.0001 | 0.0784 | 1.22e-6 | 0.0012 | 0.6719 | 9.9211 | 0.0916 | 0.1134 |
| DecAEvolve (Qwen2.5-3B) | 3.23e-6 | 0.0002 | 4.36e-5 | 0.0008 | 0.0115 | 0.0454 | 0.0487 | 0.1612 |
| DecAEvolve (Qwen2.5-7B) | 1.25e-6 | **1.51e-5** | 8.06e-7 | 1.64e-5 | **0.0007** | **0.0012** | 0.0198 | **0.0322** |

Normalized MSE (lower is better), averaged over five runs; column-best in bold. SINDy wins Oscillator 2 because its assumed sparse library happens to match the system — and pays for that prior with a 3.5×10¹⁵ blow-up on real materials data, illustrating why baked-in libraries are not free.

Discovery trajectories across datasets and backbones
Discovery trajectories. Best-so-far normalized MSE versus number of evaluated candidates, for all four datasets and all six backbones. Gray = LLM-SR baseline; pink = +GRPO only; blue = +Decomp only; dark = full DecAEvolve. The combined dark curve is typically both the steepest early drop (better sample efficiency) and the lowest plateau (better terminal accuracy).
GRPO reward improvement curves
GRPO actually learns. Per-step reward during the adaptation stage across all 24 (model × dataset) combinations. Curves rise smoothly and saturate near 1.0, confirming that the bounded reward and KL anchor produce stable test-time updates rather than mode collapse.

What the results tell us

Five take-aways.

Decomposition is search signal, not explanation

The term-level deltas are not after-the-fact interpretability — they are the next prompt's content. Prior successes become reusable at the granularity of building blocks, so the search recombines components instead of regenerating whole equations.

Test-time RL aligns priors with the system

Distilling the data distribution into the policy at search time turns a fixed prior into a posterior. Smaller open-source models with this loop equal or beat much larger general-purpose backbones running classic LLM-SR.

The two pieces compose

Decomposition makes rewards informative; adaptation makes informative rewards actionable through the weights. Either alone is an improvement; together they remove the dominant failure mode of static, scalar-feedback search.

Re-optimized ablations matter

Credit assignment under frozen parameters is biased — coupled systems mask true contribution. Refitting on every ablation is what makes the Δ values trustworthy enough to feed back into both the prompt and the reward.

Real data, not just synthetic recall

The largest jumps are on OOD splits and the experimental stress–strain dataset — settings where memorized scientific forms fail. The framework gains most exactly where existing methods are weakest.

Limitations & open directions

Decomposition currently reasons about additive structure; deeper hierarchical reflection over multiplicative subtrees, richer search-space optimizers, and broader program-synthesis tasks with highly correlated components are natural extensions.

Cite

BibTeX

@inproceedings{behzadifar2026decaevolve,
  title     = {DecAEvolve: Decompose, Adapt, and Evolve, or, Three Pillars
               of Effective LLM-based Scientific Equation Discovery},
  author    = {Behzadifar, Pouya and Shojaee, Parshin and Kabra, Sanchit
               and Meidani, Kazem and Reddy, Chandan K},
  booktitle = {Proceedings of the 43rd International Conference on Machine Learning},
  address   = {Seoul, South Korea},
  publisher = {PMLR},
  volume    = {306},
  year      = {2026}
}