The Summer 2022 cohort was MATS's first full-scale program, with 31 scholars and 7 mentors from leading AI safety organizations including OpenAI, ARC, MIRI, EleutherAI, and Aligned AI. The program ran 5 weeks online followed by 8 weeks in-person in Berkeley, where scholars conducted independent research under expert mentorship, participated in educational seminars, and built community with peers in the Berkeley AI safety ecosystem.

MATS supports researchers across a variety of research tracks, including technical governance, empirical research, policy & strategy, theory, and compute governance. MATS fellows participate in a research stream consisting of their mentor(s) and other mentees; you can specify which tracks and streams to apply to in the general application. Each stream provides its own research agenda, methodology, and mentorship focus.
SolidGoldMagikarp (plus, prompt generation)
Anomalous tokens: a mysterious failure mode for GPT (which reliably insulted Matthew)
Authors: Jessica Cooper (Rumbelow), Matthew Watkins
Date: Dec 14, 2025
Citations: 22
A Toy Model of Universality: Reverse Engineering How Networks Learn Group Operations
Universality is a key hypothesis in mechanistic interpretability -- that different models learn similar features and circuits when trained on similar tasks. In this work, we study the universality hypothesis by examining how small neural networks learn to implement group composition. We present a novel algorithm by which neural networks may implement composition for any finite group via mathematical representation theory. We then show that networks consistently learn this algorithm by reverse engineering model logits and weights, and confirm our understanding using ablations. By studying networks of differing architectures trained on various groups, we find mixed evidence for universality: using our algorithm, we can completely characterize the family of circuits and features that networks learn on this task, but for a given network the precise circuits learned -- as well as the order they develop -- are arbitrary.
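The task studied in this abstract — training small networks to compute the product of two group elements — can be made concrete with a short dataset sketch. This is an illustrative example, not the paper's actual code: it builds the full composition table for the symmetric group S3, with each group element encoded as an integer token id, which is the kind of input/label pairing such a network would be trained on.

```python
from itertools import permutations

# Hypothetical sketch of the group-composition task: a network is trained
# to map a pair of group elements (a, b) to their product a∘b. Here we
# enumerate the symmetric group S3 (6 elements) and build its full
# composition table, with elements encoded as integer token ids.

def s3_elements():
    # Each element of S3 is a permutation of (0, 1, 2).
    return list(permutations(range(3)))

def compose(p, q):
    # Function composition of permutations: (p ∘ q)(i) = p(q(i)).
    return tuple(p[q[i]] for i in range(len(q)))

def composition_dataset():
    elems = s3_elements()
    index = {p: i for i, p in enumerate(elems)}
    # Each example: input token ids (a, b), label token id for a∘b.
    return [((index[a], index[b]), index[compose(a, b)])
            for a in elems for b in elems]

data = composition_dataset()
print(len(data))  # → 36, one example per ordered pair in a 6-element group
```

In the paper's setup, a network trained on examples like these is then reverse-engineered to see which representation-theoretic circuits it uses to implement the composition.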
Authors: Bilal Chughtai, Lawrence Chan, Neel Nanda
Date: Dec 14, 2025
Citations: 123
The MATS Program is supported by a diverse and highly respected group of mentors — top-tier researchers, engineers, and thinkers working across AI alignment, governance, interpretability, and security.
The MATS research phase provides scholars with a community of peers.

Scholars work out of a shared office and are supported by the Community Team.
MATS alumni report that the peer connections they made during MATS have had the largest impact on them years later. Our full-time Community Team works to facilitate these connections and provide general well-being support. Weekly lightning talks, scholar-led discussion groups, game nights, and outings to SF are some examples of MATS events.