When LLMs Stop Following Steps: A Diagnostic Study of Procedural Execution in Language Models
A diagnostic benchmark across 14 models shows that procedural-execution accuracy drops from 61% to 20% as procedure length grows from 5 to 95 steps, revealing that LLMs often fail to faithfully execute the procedures specified in their prompts.
Excerpt
Large language models (LLMs) often achieve strong performance on reasoning benchmarks, but final-answer accuracy alone does not show whether they faithfully execute the procedure specified in a prompt. We study this question through a controlled diagnostic benchmark for procedural execution, where models are given a step-wise arithmetic algorithm and two numeric inputs, and must return the final computed value. The benchmark uses simple arithmetic operations but increases complexity through algorithm length and look-back dependencies over intermediate variables. Across 14 models and 55 datasets, average first-answer accuracy drops from 61% on 5-step procedures to 20% on 95-step procedures. Generation-level analysis shows that failures often involve missing answers, premature answers, self-correction after an initial error, under-executed traces, and hallucinated extra steps. These findings suggest that apparent reasoning ability can mask substantial weaknesses in faithful instruction execution.
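To make the task format concrete, here is a minimal sketch of how such a procedural-execution instance could be constructed: a chain of simple arithmetic steps over two inputs, parameterized by the number of steps and a look-back window over earlier intermediate variables. The function names (`generate_procedure`, `execute_procedure`) and the specific operations and look-back scheme are illustrative assumptions, not the authors' actual benchmark generator.

```python
import random

def generate_procedure(num_steps, max_lookback=3, seed=0):
    """Build a toy step-wise arithmetic procedure over two inputs x and y.

    Each step applies a simple operation to an earlier intermediate variable,
    chosen within a look-back window of `max_lookback` steps. Hypothetical
    sketch of the task format, not the paper's generator.
    """
    rng = random.Random(seed)
    lines = ["v1 = x + y"]  # first intermediate value combines the two inputs
    for i in range(2, num_steps + 1):
        # Reference an earlier variable within the look-back window.
        j = rng.randint(max(1, i - max_lookback), i - 1)
        op, operand = rng.choice([("+", rng.randint(1, 9)),
                                  ("-", rng.randint(1, 9)),
                                  ("*", rng.choice([2, 3]))])
        lines.append(f"v{i} = v{j} {op} {operand}")
    lines.append(f"return v{num_steps}")
    return "\n".join(lines)

def execute_procedure(procedure, x, y):
    """Compute the ground-truth answer by executing the steps literally."""
    env = {"x": x, "y": y}
    for line in procedure.splitlines():
        if line.startswith("return"):
            return env[line.split()[1]]
        var, expr = line.split(" = ")
        env[var] = eval(expr, {}, dict(env))

if __name__ == "__main__":
    proc = generate_procedure(num_steps=5, seed=42)
    print(proc)
    print("expected answer:", execute_procedure(proc, x=7, y=3))
```

Under this sketch, scaling `num_steps` from 5 to 95 corresponds to the length axis reported in the study, while `max_lookback` stands in for the look-back-dependency knob; the model would receive the generated step list plus the two inputs and be scored on whether its final numeric answer matches the ground-truth execution.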
Read at source: https://arxiv.org/abs/2605.00817v1