Automatically Finding Reward Model Biases

MATS Fellow: Atticus Wang

Authors: Atticus Wang, Iván Arcuschin, Arthur Conmy

Abstract:

Reward models (RMs) are central to large language model (LLM) post-training. However, past work has shown that they can reward spurious or undesirable attributes such as length, format, hallucinations, and sycophancy. In this work, we introduce and study the research problem of automatically finding reward model biases expressed in natural language. We offer a simple approach that uses an LLM to iteratively propose and refine candidate biases. Our method can recover known biases and surface novel ones: for example, we found that Skywork-V2-8B, a leading open-weight reward model, often mistakenly favors responses with redundant spacing and responses with hallucinated content. In addition, we show evidence that evolutionary iteration outperforms flat best-of-N search, and we validate the recall of our pipeline using synthetically injected biases. We hope our work contributes to further research on improving RMs through automated interpretability methods.
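The abstract contrasts evolutionary iteration (refining the best candidate across generations) with flat best-of-N search (sampling N candidates independently). The sketch below illustrates that contrast only in shape: `propose` and `score` are hypothetical stubs standing in for the LLM that proposes or refines a candidate bias and for the validation step that scores it against the reward model; they are not the authors' actual pipeline.

```python
import random

def propose(parent=None):
    # Stub for an LLM call that proposes a new candidate bias, or refines
    # `parent` if one is given. A float stands in for a bias description;
    # its value plays the role of how well the candidate explains RM errors.
    base = parent if parent is not None else 0.0
    return base + random.uniform(-0.1, 0.3)

def score(candidate):
    # Stub for validating a candidate bias against the reward model.
    return candidate

def best_of_n(n):
    # Flat search: propose n candidates independently, keep the best.
    return max(score(propose()) for _ in range(n))

def evolutionary(generations, pop_size):
    # Evolutionary iteration: each generation refines the best candidate
    # found so far, keeping it in the population (elitism).
    population = [propose() for _ in range(pop_size)]
    for _ in range(generations):
        parent = max(population, key=score)
        population = [propose(parent) for _ in range(pop_size)]
        population.append(parent)
    return max(score(c) for c in population)
```

Under the same total proposal budget, refinement lets later candidates build on earlier successes, which is the intuition behind the reported advantage over flat best-of-N.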
