The Summer 2026 program will run from June through August. It will be the largest MATS program to date, with 120 scholars and 100 mentors. Fellows will be connected with mentors or organizational research groups, such as Anthropic's Alignment Science team, UK AISI, Redwood Research, ARC, and LawZero, to collaborate on a research project over the summer. Some fellows will be offered a 6+ month extension to continue this collaboration.

Key dates for the application and admissions timeline
General Application (December 16th to January 18th)
Applicants fill out a general application, which should take 1-2 hours to complete. Applications are due by January 18th.
Additional Evaluations (Late January through March)
Applicants who advance in the application process go through additional evaluations, including reference checks, coding tests, work tests, and interviews. Which evaluations you undergo depends on the mentors and streams you apply to.
Admissions Decisions (Early April)
Selected applicants are notified of their acceptance and anticipated mentor in early April.
The main program takes place from early June to late August of 2026. It is an intensive research phase in which fellows work full time on a research project in AI alignment, security, or governance. Research directions will typically be chosen through a collaborative process with mentors, and fellows are expected to develop their own independent research direction as the program continues.
While mentor support will vary depending on the project and mentors, mentors are expected to spend at least 1 hour/week working with each of their scholars, and some spend much more time. Scholars will also receive support from MATS's Research Management team, who help scope and structure research directions.
Approximately one month into the program, scholars are expected to write a short Research Plan outlining their project's threat model, theory of change, and deliverables. At the end of the program, scholars will give a brief presentation at the Scholar Symposium on the work they conducted over the course of MATS.
Educational seminars and workshops will be held 2-3 times per week. Previously, speakers have included Buck Shlegeris from Redwood Research, Adam Gleave from FAR AI, Neel Nanda from Google DeepMind, William Saunders from OpenAI, Andrew Critch from CHAI, Lennart Heim from GovAI, Ajeya Cotra from Open Philanthropy, and more.
The extension phase starts in September of 2026. Scholars who demonstrate promise as independent researchers during the main program can apply for the MATS extension phase. Acceptance into the extension is based on mentor endorsement and an evaluation of scholars' research plans by an independent technical program committee.
The extension phase offers a default 6-month continuation, with exceptional scholars eligible for a 12-month Fellowship. Beginning four weeks after the end of the main program (with flexible start dates), extension scholars primarily work from Berkeley, California, the MATS London office, other AI safety hubs, or fully remotely.
MATS arranges funding for stipends, housing, and compute resources for accepted extension scholars, creating a seamless transition into this advanced phase of the program. Historically, around 70% of scholars have been accepted into the extension.
MATS aims to accelerate researchers who will advance the field of AI safety.
MATS alumni have gone on to publish safety research, join alignment organizations including Anthropic and MIRI, and found new alignment research organizations. You can read more about MATS alumni here.
MATS supports researchers across a variety of research tracks, including empirical, technical governance, policy & strategy, theory, and compute governance. MATS fellows participate in a research stream consisting of their mentor(s) and other mentees. You can specify which tracks and streams to apply to in the general application. Each stream provides its own research agenda, methodology, and mentorship focus.
Roger Grosse’s stream investigates how to improve influence functions and other training data attribution methods, and uses these tools to study alignment-related phenomena such as out-of-context reasoning and emergent misalignment. The ideal scholar has experience with LLM internals, strong statistics/applied math skills (especially numerical linear algebra), and can independently drive research from literature review through experimentation and analysis. Roger provides shovel-ready projects while giving exceptional scholars freedom to pursue their own ideas, and is open to scholars collaborating with others.
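For context, influence functions estimate how a model's behavior would change if a single training example were upweighted. A minimal statement of the classical approximation (Koh & Liang, 2017) is:

$$
\mathcal{I}_{\text{up}}(z, z_{\text{test}}) = -\nabla_\theta L(z_{\text{test}}, \hat\theta)^\top H_{\hat\theta}^{-1} \nabla_\theta L(z, \hat\theta), \qquad H_{\hat\theta} = \frac{1}{n}\sum_{i=1}^{n} \nabla_\theta^2 L(z_i, \hat\theta)
$$

Here $z$ is a training example, $z_{\text{test}}$ is a query point, and $H_{\hat\theta}$ is the empirical Hessian of the training loss at the trained parameters $\hat\theta$. Computing $H_{\hat\theta}^{-1}$ exactly is intractable for large language models, so practical work relies on approximations (e.g., EK-FAC in Grosse et al., 2023); improving such approximations is the kind of problem this stream targets.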
I will meet with scholars 1 hour per week by default, and will be available to answer questions on Slack roughly daily.
I will give the scholar the level of freedom they are ready for. I will be prepared with focused, shovel-ready projects, but exceptional scholars with a vision they are excited about will have the flexibility to pursue it.
We build scalable technology for AI understanding and oversight.
You will work closely with a mentor through recurring meetings (group and individual) and Slack.
We're looking for strong, experienced software engineers or talented researchers who can hit the ground running and iterate quickly.
ML experience is a bonus but not required.
We will talk through project ideas with scholars.
The stream will advance empirical methodologies for third-party AI safety evaluations. Example research topics include chain-of-thought monitorability, the secret loyalties research agenda, and automated auditing (e.g., with Anthropic's Parallel Exploration Tool for Risky Interactions).
I will have scholars work in teams. During the week, the scholars will collaborate with each other and are encouraged to meet frequently. I will hold a weekly advising meeting for each project to provide help and guidance.
My projects prioritize impact on the field of AI safety over academic novelty. Beyond the skills needed for empirical AI safety research, I am looking for collaborators who are excited about doing sound and impactful science, including the mundane aspects of doing good science.
I will provide a list of projects, but can also talk through other project ideas.
The stream will focus on conceptual, empirical, and theoretical work on scalable oversight and control. This includes but is not limited to creating model organisms for specific failure modes, designing training procedures against them, and making progress on subproblems involved in safety cases.
A research agenda document will be shared ahead of time with a short list of project ideas. The scholars can also brainstorm and pitch ideas that are aligned with the research agenda. We will decide on assignments in week 2.
I (Cas) work on a range of projects, from technical safeguards to technical governance. This stream follows an academic collaboration model, and work will likely focus on technical topics in AI governance.
2-3 meetings per week plus regular messaging and collaborative writing.
Green flags include:
Mentor(s) will talk through project ideas with the scholar.
In the shard theory stream, we create qualitatively new methods and fields of inquiry, from steering vectors to gradient routing to unsupervised capability elicitation to robust unlearning. If you're theory-minded, maybe you'll help us formalize shard theory itself.
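As a concrete illustration of one of the methods named above, here is a minimal sketch of activation steering with a contrastive steering vector. It assumes a GPT-2-style model from Hugging Face transformers; the layer index, prompts, and steering coefficient are illustrative assumptions, not the stream's actual setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

LAYER = 6    # which block's output to steer (illustrative choice)
COEFF = 5.0  # steering strength (tunable assumption)

def block_output(prompt: str) -> torch.Tensor:
    """Residual-stream activations after block LAYER, at the last token."""
    with torch.no_grad():
        out = model(**tok(prompt, return_tensors="pt"), output_hidden_states=True)
    # hidden_states[0] is the embeddings; hidden_states[k + 1] is block k's output
    return out.hidden_states[LAYER + 1][:, -1, :]

# The steering vector is the activation difference between contrastive prompts.
steer = COEFF * (block_output("Love") - block_output("Hate"))

def hook(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states;
    # returning a new tuple replaces the block's output with the steered one.
    return (output[0] + steer,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(hook)
ids = tok("I think dogs are", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=20, do_sample=False)[0]))
handle.remove()  # remove the hook to restore normal behavior
```

The same pattern, extracting a direction from contrasting inputs and adding it during the forward pass, underlies much of the steering-vector literature.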
We will have weekly 1:1s and a weekly team lunch, as well as asynchronous communication over Slack. Mentees are welcome to reach out at any time if guidance is needed outside of usual meeting times.
Scholars should mostly figure things out on their own outside of meetings
Ideal candidates would have:
Mentor(s) will talk through project ideas with the scholar.
I'm mostly interested in AI control and scalable oversight. I'm excited to work with scholars interested in empirical projects building and evaluating control measures and oversight techniques for LLM agents, especially those based on chain-of-thought monitoring. I'm also interested in the science of chain-of-thought monitorability, misalignment, and control. An ideal project ends with a paper submitted to NeurIPS/ICML/ICLR.
I'll meet with mentees once a week and will be available on Slack daily.
An ideal mentee has a strong AI research and/or software engineering background. Mentees can be PhD students, and they are welcome to work on a paper that will become part of their thesis.
I'll talk through project ideas with scholars.
This stream is for the UK AISI Red Team. The team focuses on stress-testing mitigations for AI risk, including misuse safeguards, control techniques, and model alignment red-teaming. We plan to work on projects that build and improve methods for performing these kinds of evaluations.
Each scholar will have one primary mentor from the Red Team who will provide weekly guidance and day-to-day support
Scholars will also have access to secondary advisors within their specific sub-team (misuse, alignment, or control) for technical deep-dives
Team lead Xander Davies and advisors Geoffrey Irving and Yarin Gal will provide periodic feedback through team meetings and project reviews
For scholars working on cross-cutting projects, we can arrange mentorship from multiple sub-teams as needed
Structure:
Weekly 1:1 meetings (60 minutes) with primary mentor for project updates, technical guidance, and problem-solving
Asynchronous communication via Slack/email throughout the week for quick questions and feedback
Bi-weekly team meetings where scholars can present work-in-progress and get broader team input
Working style:
We expect scholars to work semi-independently – taking initiative on their research direction while leveraging mentors for guidance on technical challenges, research strategy, and navigating AISI resources
Scholars will have access to our compute resources, pre-release frontier models, and operational support to focus on research
We encourage scholars to document their work and, if appropriate, aim for publication or public blog posts
We're looking for scholars with hands-on experience in machine learning and AI security, particularly those interested in adversarial robustness, red teaming, or AI safeguards. Ideal candidates would have:
We welcome scholars at various career stages, especially those who are eager to work on problems with direct impact on how frontier AI is governed and deployed.
Scholars will choose from a set of predefined project directions aligned with our current research priorities, such as:
We'll provide initial direction and guidance on project scoping, then scholars will have autonomy to explore specific approaches within that framework.
Expect weekly touchpoints to ensure progress and refine directions.
If mentees have particular ideas they're excited about that they see as fitting within the scope of the team's work, they're welcome to propose them, but there is no guarantee those ideas will be selected.
Conceptual research on deceptive alignment, designing scheming propensity evaluations and honeypots. The stream will run in person in London, with scholars working together in team(s).
During the program, we will meet once a week to go through any updates / results, and your plans for the next week. I'm also happy to comment on docs, respond on Slack, or have additional ad hoc meetings as needed.
I will talk through project ideas with scholars
This stream will work on gathering and analyzing data in order to shed light on the driving forces behind AI and monitor its impacts.
Scholars will have a weekly half-hour individual meeting with their mentor, as well as a weekly half-hour group meeting with their mentor. Additionally, scholars will attend Epoch's weekly Work In Progress meeting.
Some useful characteristics (don't need all of these):
Scholars will pick from a list of projects.
The MATS research phase provides scholars with a community of peers.

Scholars work out of a shared office and are supported by the Community Team.
MATS alumni report that the peer connections they made during MATS have had the largest impact on them years later. Our full-time Community Team works to facilitate these connections and also provides general well-being support. Weekly lightning talks, scholar-led discussion groups, game nights, and outings to SF are some examples of MATS events.