MATS Summer 2026

The Summer 2026 program will run from June through August. It will be the largest MATS program to date, with 120 scholars and 100 mentors. Fellows will be connected with mentors or organizational research groups, such as Anthropic's Alignment Science team, UK AISI, Redwood Research, ARC, and LawZero, to collaborate on a research project over the summer. Some fellows will be offered a 6+ month extension to continue this collaboration.

Program phases

Key dates for the application and admissions timeline

1. Applications

General Application (December 16th to January 18th) 

Applicants fill out a general application, which should take 1-2 hours. Applications are due by January 18th.

Additional Evaluations (Late January through March)

Applicants who advance in the application process go through additional evaluations, including reference checks, coding tests, work tests, and interviews. Which evaluations you undergo depends on the mentors and streams you apply to.

Admissions Decisions (Early April)

Selected applicants are notified of their acceptance and anticipated mentor later in the application cycle.

2. Main Program
3. Extension Phase
4. Post-program

Summer 2026 Streams

MATS supports researchers across a variety of research tracks, including technical governance, empirical, policy & strategy, theory, and compute governance. MATS fellows participate in a research stream consisting of their mentor(s) and other mentees. You can specify which tracks and streams to apply to in the general application. Each stream provides its own research agenda, methodology, and mentorship focus.

Toronto
Empirical
Interpretability

Roger Grosse’s stream investigates how to improve influence functions and other training data attribution methods, and uses these tools to study alignment-related phenomena such as out-of-context reasoning and emergent misalignment. The ideal scholar has experience with LLM internals, strong statistics/applied math skills (especially numerical linear algebra), and can independently drive research from literature review through experimentation and analysis. Roger provides shovel-ready projects while giving exceptional scholars freedom to pursue their own ideas, and is open to scholars collaborating with others.

Mentorship structure
Desired scholar characteristics
Fellow project selection process
SF Bay Area
Empirical
Interpretability, Monitoring, Dangerous Capability Evals

We build scalable technology for AI understanding and oversight.

Mentorship structure
Desired scholar characteristics
Fellow project selection process
SF Bay Area
Empirical
Control, Dangerous Capability Evals, Red-Teaming, Monitoring, Model Organisms, Safeguards

The stream will advance empirical methodologies for third-party AI safety evaluations. Example research topics include chain-of-thought monitorability, the secret loyalties research agenda, and automatic auditing (e.g., with Anthropic's Parallel Exploration Tool for Risky Interactions).

Mentorship structure
Desired scholar characteristics
Fellow project selection process
New York City
Empirical
Control, Scalable Oversight, Red-Teaming, Model Organisms, Monitoring

The stream will focus on conceptual, empirical, and theoretical work on scalable oversight and control. This includes but is not limited to creating model organisms for specific failure modes, designing training procedures against them, and making progress on subproblems involved in safety cases.

Mentorship structure
Desired scholar characteristics
Fellow project selection process
Boston
Technical Governance
Adversarial Robustness, Policy & Governance, Red-Teaming, Safeguards

I (Cas) work on a range of projects, from technical safeguards to technical governance. This stream follows an academic collaboration model, and work will likely focus on technical topics in AI governance.

Mentorship structure
Desired scholar characteristics
Fellow project selection process
SF Bay Area
Empirical
Interpretability, Agent Foundations

In the shard theory stream, we create qualitatively new methods and fields of inquiry, from steering vectors to gradient routing to unsupervised capability elicitation to robust unlearning. If you're theory-minded, maybe you'll help us formalize shard theory itself.

Mentorship structure
Desired scholar characteristics
Fellow project selection process
SF Bay Area
Empirical
Control, Monitoring, Dangerous Capability Evals

I'm mostly interested in AI control and scalable oversight. I'm excited to work with scholars interested in empirical projects building and evaluating control measures and oversight techniques for LLM agents, especially those based on chain-of-thought monitoring. I'm also interested in the science of chain-of-thought monitorability, misalignment, and control. An ideal project ends with a paper submitted to NeurIPS/ICML/ICLR.

Mentorship structure
Desired scholar characteristics
Fellow project selection process
London
Empirical
Monitoring, Adversarial Robustness, Control, Model Organisms, Red-Teaming, Dangerous Capability Evals, Safeguards

This stream is for the UK AISI Red Team. The team focuses on stress-testing mitigations for AI risk, including misuse safeguards, control techniques, and model alignment red-teaming. We plan to work on projects building and improving methods for performing these kinds of evaluations.

Mentorship structure
Desired scholar characteristics
Fellow project selection process
London
Empirical
Scheming & Deception, Dangerous Capability Evals, Control, Red-Teaming

This stream covers conceptual research on deceptive alignment, as well as designing scheming propensity evaluations and honeypots. The stream will run in person in London, with scholars working together in team(s).

Mentorship structure
Desired scholar characteristics
Fellow project selection process
SF Bay Area
Compute Infrastructure
Compute Infrastructure

This stream will work on gathering and analyzing data in order to shed light on the driving forces behind AI and monitor its impacts.

Mentorship structure
Desired scholar characteristics
Fellow project selection process


Community at MATS

The MATS research phase provides scholars with a community of peers.

Scholars work out of a shared office and are supported by the Community Team.

MATS alumni report that the peer connections they made during MATS have had the largest impact on them years later. Our full-time Community Team works to facilitate these connections and to provide general well-being support. Weekly lightning talks, scholar-led discussion groups, game nights, and outings to SF are some examples of MATS events.