Contextual Linear Activation Steering of Language Models

· ArXiv · AI/CL/LG ·

CLAS dynamically adapts steering strength per token based on context, consistently outperforming fixed-strength linear activation steering and matching ReFT/LoRA performance with limited labeled data.

Categories: Research

Excerpt

Linear activation steering is a powerful approach for eliciting the capabilities of large language models and specializing their behavior using limited labeled data. While effective, existing methods often apply a fixed steering strength to all tokens, resulting in inconsistent steering quality across diverse input prompts. In this work, we introduce Contextual Linear Activation Steering (CLAS), a method that replaces the fixed strength with context-dependent, per-token steering strengths. Across eleven steering benchmarks and four model families, CLAS consistently outperforms standard linear activation steering and matches or exceeds the performance of ReFT and LoRA in settings with limited labeled data. We therefore propose CLAS as a scalable, interpretable, and accurate method for specializing and steering large language models.
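To make the core idea concrete, here is a minimal NumPy sketch of the contrast between fixed-strength linear steering and a context-dependent variant. This is an illustrative toy, not the paper's implementation: the sigmoid gate (`gate_w`, `gate_b`) and the strength range are assumptions standing in for whatever parameterization CLAS actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8  # toy hidden size

# Fixed steering direction v (unit norm), as in standard linear steering.
steering_vec = rng.normal(size=d_model)
steering_vec /= np.linalg.norm(steering_vec)

# Hypothetical gate parameters mapping a token's hidden state to a scalar
# strength; CLAS's real parameterization may differ.
gate_w = rng.normal(size=d_model) * 0.1
gate_b = 0.0

def fixed_steer(h, alpha=2.0):
    """Standard linear activation steering: one global strength alpha
    applied to every token's hidden state h."""
    return h + alpha * steering_vec

def contextual_steer(h):
    """Context-dependent steering: the strength is a function of the
    token's own hidden state (here a sigmoid gate scaled to (0, 4))."""
    alpha = 4.0 / (1.0 + np.exp(-(h @ gate_w + gate_b)))
    return h + alpha * steering_vec

# Apply per token to a toy sequence of activations (seq_len x d_model).
hidden = rng.normal(size=(3, d_model))
steered = np.stack([contextual_steer(h) for h in hidden])
print(steered.shape)  # (3, 8)
```

Each token moves along the same direction `steering_vec`, but by a different amount depending on its context, which is what lets the method avoid over- or under-steering individual prompts.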