Biases in the Blind Spot: Detecting What LLMs Fail to Mention

MATS Fellow: Iván Arcuschin Moreno

Authors: Iván Arcuschin, David Chanin, Adrià Garriga-Alonso, Oana-Maria Camburu

Citations: 0

Abstract:

Large Language Models (LLMs) often provide chain-of-thought (CoT) reasoning traces that appear plausible, but may hide internal biases. We call these *unverbalized biases*. Monitoring models via their stated reasoning is therefore unreliable, and existing bias evaluations typically require predefined categories and hand-crafted datasets. In this work, we introduce a fully automated, black-box pipeline for detecting task-specific unverbalized biases. Given a task dataset, the pipeline uses LLM autoraters to generate candidate bias concepts. It then tests each concept on progressively larger input samples by generating positive and negative variations, and applies statistical techniques for multiple testing and early stopping. A concept is flagged as an unverbalized bias if it yields statistically significant performance differences while not being cited as justification in the model's CoTs. We evaluate our pipeline across six LLMs on three decision tasks (hiring, loan approval, and university admissions). Our technique automatically discovers previously unknown biases in these models (e.g., Spanish fluency, English proficiency, writing formality). In the same run, the pipeline also validates biases that were manually identified by prior work (gender, race, religion, ethnicity). More broadly, our proposed approach provides a practical, scalable path to automatic task-specific bias discovery.
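To make the detection loop concrete, the sketch below shows one plausible reading of the pipeline described above. It is a minimal illustration under stated assumptions, not the authors' implementation: the helper names (`make_variation`, `run_model`, `mentions_concept`), the t-test, and the Bonferroni correction are placeholders, and the progressive sampling with early stopping is omitted.

```python
# Minimal sketch (assumptions, not the paper's code): each candidate concept is
# tested by pairing every input with a "concept present" and "concept absent"
# variant, comparing decisions statistically, and checking whether the model's
# CoT ever cites the concept. The real pipeline also grows the sample size with
# early stopping, which is omitted here.
import random
from scipy.stats import ttest_ind


def make_variation(example: str, concept: str, present: bool) -> str:
    """Placeholder: rewrite the input so the concept is present or absent."""
    return f"[{concept}: {'yes' if present else 'no'}] {example}"


def run_model(prompt: str) -> tuple[float, str]:
    """Placeholder for an LLM call: returns (decision score, CoT text)."""
    return random.random(), "The applicant's qualifications seem adequate."


def mentions_concept(cot: str, concept: str) -> bool:
    """Placeholder check for whether the CoT cites the concept as a reason."""
    return concept.lower() in cot.lower()


def detect_unverbalized_biases(task_inputs, candidate_concepts, alpha=0.05):
    """Flag concepts that shift decisions but are never cited in the CoT."""
    flagged = []
    # Bonferroni correction, standing in for the paper's multiple-testing control.
    corrected_alpha = alpha / max(len(candidate_concepts), 1)
    for concept in candidate_concepts:
        pos, neg, verbalized = [], [], False
        for x in task_inputs:
            score_pos, cot_pos = run_model(make_variation(x, concept, present=True))
            score_neg, cot_neg = run_model(make_variation(x, concept, present=False))
            pos.append(score_pos)
            neg.append(score_neg)
            verbalized |= mentions_concept(cot_pos, concept) or mentions_concept(cot_neg, concept)
        _, p_value = ttest_ind(pos, neg)
        if p_value < corrected_alpha and not verbalized:
            flagged.append((concept, p_value))
    return flagged


if __name__ == "__main__":
    inputs = [f"Applicant profile #{i}" for i in range(30)]
    print(detect_unverbalized_biases(inputs, ["Spanish fluency", "writing formality"]))
```

The essential design point is the final condition: a concept only counts as an unverbalized bias if it both shifts decisions to a statistically significant degree and never appears as a stated justification in the chain of thought.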
