A Single Neuron Is Sufficient to Bypass Safety Alignment in Large Language Models
Research demonstrates that single neurons in refusal and concept systems are causally sufficient to bypass or induce harmful content across 7 models from 1.7B to 70B parameters.
Excerpt
Hamid Kazemi, Atoosa Chegini, Maria Safi

Safety alignment in language models operates through two mechanistically distinct systems: refusal neurons that gate whether harmful knowledge is expressed, and concept neurons that encode the harmful knowledge itself. By targeting a single neuron in each system, we demonstrate both directions of failure -- bypassing safety on explicit harmful requests via suppression, and inducing harmful content from innocent prompts via amplification -- across seven models spanning two families and 1.7B to 70B parameters, without any training or prompt engineering. Our findings suggest that safety alignment is not robustly distributed across model weights but is mediated by individual neurons that are each causally sufficient to gate refusal behavior -- suppressing any one of the identified refusal neurons bypasses safety alignment across diverse harmful requests.
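The core intervention described in the abstract, clamping a single neuron's activation at inference time (to zero for suppression, or to a large value for amplification), can be illustrated on a toy MLP. This is a minimal sketch of the general activation-clamping idea, not the paper's code: the network, the neuron index, and the clamp values are all illustrative assumptions.

```python
import numpy as np

def mlp_forward(x, W1, W2, intervene=None):
    """Toy two-layer MLP forward pass with an optional single-neuron
    intervention on the hidden layer.

    intervene: optional (neuron_index, value) pair. The hidden activation
    at that index is overwritten with `value` before the output projection:
    value = 0.0 models suppression, a large value models amplification.
    """
    h = np.maximum(W1 @ x, 0.0)  # hidden activations (ReLU)
    if intervene is not None:
        idx, value = intervene
        h[idx] = value           # clamp exactly one neuron
    return W2 @ h

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))   # toy weights; real models would be loaded
W2 = rng.normal(size=(2, 8))
x = rng.normal(size=4)

baseline = mlp_forward(x, W1, W2)
# Hypothetical "refusal neuron" at index 3, suppressed to 0:
suppressed = mlp_forward(x, W1, W2, intervene=(3, 0.0))
# Hypothetical "concept neuron" at index 3, amplified to a large value:
amplified = mlp_forward(x, W1, W2, intervene=(3, 10.0))

print(baseline, suppressed, amplified)
```

In a real transformer the same edit would be applied with a forward hook on the chosen layer, overwriting one coordinate of the activation tensor for every generated token.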
Read at source: https://arxiv.org/abs/2605.08513