Neel Nanda stream applications due August 29
All other stream applications are expected to open in late August. See the current confirmed list of Winter 2026 mentors below.
Application process
Steps in the application process
Create an application account. You’ll use this to access all application materials.
Once applications open, submit a MATS pre-application. This is required for all streams.
Submit applications to the MATS stream(s) you want to work with. You must submit at least one stream-specific application for your MATS application to be considered. You can and should apply to all of the MATS streams that interest you! See a list of past streams and their applications below.
Complete additional evaluations. Depending on the streams you apply to, you may be asked to complete a coding screen, interviews, or other evaluations after submitting your application. Note that this process is not standardized across streams; not being contacted for an interview does not necessarily mean your application is no longer under consideration.
Tips for applying
Make sure to check your spam folder! You may wish to set up a filter for emails from applications@matsprogram.org to ensure you don’t miss any messages.
Submit your application materials early. In the past, some applicants have had technical problems in the hour leading up to the application deadline.
Mentors will primarily evaluate candidates based on their stream-specific applications, though all mentors will have access to materials submitted to other streams.
Confirmed Winter 2026 mentors
Owain Evans
Neel Nanda
Buck Shlegeris
Ethan Perez
Yonadav Shavit
Samuel Marks
Stephen McAleer
Joe Benton
Luca Righetti
Alan Chan
Roger Grosse
Erik Jenner
Mary Phuong
Stephen Casper
Thomas Larsen
Michael Chen
David Lindner
Oliver Ethan Richardson
Daniel Kang
Roland S. Zimmermann
Victoria Krakovna
Tomek Korbak
Shi Feng
Marius Hobbhahn
Patrick Butlin
Francis Rhys Ward
Jan Betley
Andi Peng
Lee Sharkey
Micah Carroll
Sebastien Krier
Arthur Conmy
Adam Shai
Michael Dennis
Paul M. Riechers
Romeo Dean
Tom Davidson
Alex Turner
Nina Panickssery
Scott Emmons
Daniel Murfet
Eli Lifland
Rose Hadshar
Alex Cloud
Jesse Hoogland
Keri Warr
Summer 2025 Tracks
A brief description of each track is below.
- As models develop potentially dangerous behaviors, can we develop and evaluate methods to monitor and regulate AI systems, ensuring they adhere to desired behaviors while minimally undermining their efficiency or performance?
- Many stories of AI accidents and misuse involve potentially dangerous capabilities, such as sophisticated deception and situational awareness, that have not yet been demonstrated in AI. Can we evaluate such capabilities in existing AI systems to form a foundation for policy and further technical work?
- Rigorously understanding how ML models function may allow us to identify and train against misalignment. Can we reverse engineer neural nets from their weights, or identify structures corresponding to “goals” or dangerous capabilities within a model and surgically alter them?
- As AI systems continue to advance and grow more capable, can we develop policies, standards, and frameworks to guide the ethical development, deployment, and regulation of AI technologies, ensuring safety and societal benefit?
- As models continue to scale, they become more agentic, and we need methods to study this newfound agency. How do we model optimal agents, how do those agents interact with one another, and how can agents be aligned with one another?
- As AI systems become more capable, they will become higher-value targets for theft and more able to undermine cybersecurity protocols. How do we ensure that the weights of valuable ML models remain under the control of developers, and that AI improves rather than degrades the state of cybersecurity?