Study: using weaker AI models to supervise a more capable model could prevent the stronger model from deliberately underperforming on benchmarks and evaluations (Emil Ryd/@emilaryd)

Techmeme

Researchers from the MATS program, Redwood Research, and Anthropic show that weaker models can supervise more capable models to prevent strategic sandbagging on benchmarks, addressing a key alignment challenge.

Categories: Research

Excerpt

<p><a href="https://www.techmeme.com/260506/p9#a260506p9" title="Techmeme permalink"><img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /></a> Emil Ryd / <a href="https://x.com/emilaryd">@emilaryd</a>:<br /> <span style="font-size: 1.3em;"><b><a href="https://x.com/emilaryd/status/2051697625179582606">Study: using weaker AI models to supervise a more capable model could prevent the stronger model from deliberately underperforming on benchmarks and evaluations</a></b></span>&nbsp; &mdash;&nbsp; New paper from MATS, Redwood, and Anthropic! If a capable model is strategically sandbagging, can we train it to stop when the only supervision we have comes from weaker models? We find that we can! Work done as part of the Anthropic-Redwood MATS stream. [image]</p>