MATS Fellow:
Eoin Farrell, Yeu-Tong Lau
Authors:
Eoin Farrell, Yeu-Tong Lau, Arthur Conmy
Abstract:
We investigate whether sparse autoencoders (SAEs) can be used to remove knowledge from language models. We use the biology subset of the Weapons of Mass Destruction Proxy (WMDP) dataset and test on the gemma-2b-it and gemma-2-2b-it language models. We demonstrate that individual interpretable biology-related SAE features can be used to unlearn a subset of WMDP-Bio questions with minimal side effects in domains other than biology. Our results suggest that negative scaling of feature activations is necessary and that zero-ablating features is ineffective. We find that intervening on multiple SAE features simultaneously can unlearn multiple different topics, but with similar or larger unwanted side effects than the existing Representation Misdirection for Unlearning technique. Current SAE quality or intervention techniques would need to improve to make SAE-based unlearning comparable to existing fine-tuning-based techniques.
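The intervention the abstract contrasts with zero-ablation can be sketched as follows. This is a hedged toy illustration, not the paper's implementation: the weights, dimensions, feature index, and clamp value are all made up, and a real setup would hook the residual stream of an actual model (e.g. via TransformerLens) and use a trained SAE. The idea it shows is negative scaling: when a chosen SAE feature fires, its activation is clamped to a large negative value by moving the model activation along the feature's decoder direction, rather than simply zeroing it out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; real SAEs have d_model in the thousands and many more features.
d_model, d_sae = 8, 32
W_enc = rng.normal(size=(d_model, d_sae))  # hypothetical SAE encoder weights
b_enc = rng.normal(size=d_sae)             # hypothetical encoder bias
W_dec = rng.normal(size=(d_sae, d_model))  # hypothetical decoder (feature directions)

def negative_scale_feature(x, feature_idx, scale=-20.0):
    """If the chosen SAE feature fires on activation x, clamp it to `scale`
    (a negative value) by shifting x along that feature's decoder direction,
    instead of zero-ablating the feature."""
    acts = np.maximum(W_enc.T @ x + b_enc, 0.0)  # ReLU feature activations
    act = acts[feature_idx]
    if act > 0.0:  # intervene only where the feature is active
        x = x + (scale - act) * W_dec[feature_idx]
    return x

x = rng.normal(size=d_model)
x_new = negative_scale_feature(x, feature_idx=3)
```

Conditioning the edit on the feature being active is what distinguishes this from a blanket steering vector: activations on unrelated inputs are left untouched, which is one route to the low off-domain side effects the abstract reports.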
What Should Frontier AI Developers Disclose About Internal Deployments?
Authors:
Jacob Charnock, Raja Moreno, Justin Miller, William L. Anderson
Date:
April 24, 2026
Where is the Mind? Persona Vectors and LLM Individuation
Authors:
Pierre Beckmann
Date:
April 20, 2026
The MATS Program is an independent research and educational initiative connecting emerging researchers with mentors in AI alignment, governance, and security.
Each MATS cohort runs for 12 weeks in Berkeley, California, followed by an optional 6–12 month extension in London for selected scholars.